Feb 19 09:44:49 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 09:44:49 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 09:44:49 crc restorecon[4691]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:49 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 09:44:50 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 09:44:50 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 09:44:51 crc kubenswrapper[4873]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.141184 4873 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146206 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146238 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146249 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146258 4873 feature_gate.go:330] unrecognized feature gate: Example Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146267 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146277 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146285 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146293 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146303 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 09:44:51 crc 
kubenswrapper[4873]: W0219 09:44:51.146311 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146319 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146333 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146341 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146349 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146356 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146364 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146371 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146379 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146387 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146395 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146403 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146411 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146419 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146427 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 
19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146434 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146442 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146450 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146457 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146464 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146472 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146480 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146487 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146495 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146502 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146510 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146517 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146525 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146533 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146541 4873 
feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146549 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146557 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146564 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146572 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146583 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146592 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146602 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146611 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146620 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146628 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146636 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146644 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146652 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146660 4873 feature_gate.go:330] 
unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146668 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146675 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146686 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146697 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146705 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146713 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146721 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146729 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146737 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146745 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146761 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146769 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146777 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 
09:44:51.146788 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146799 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146807 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146817 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.146826 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147815 4873 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147837 4873 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147850 4873 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147862 4873 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147873 4873 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147882 4873 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147893 4873 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147904 4873 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147913 4873 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147922 4873 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.147932 4873 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147941 4873 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147951 4873 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147960 4873 flags.go:64] FLAG: --cgroup-root="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147968 4873 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147977 4873 flags.go:64] FLAG: --client-ca-file="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147986 4873 flags.go:64] FLAG: --cloud-config="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.147995 4873 flags.go:64] FLAG: --cloud-provider="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148003 4873 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148016 4873 flags.go:64] FLAG: --cluster-domain="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148025 4873 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148034 4873 flags.go:64] FLAG: --config-dir="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148044 4873 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148053 4873 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148070 4873 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148079 4873 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148088 4873 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 09:44:51 crc kubenswrapper[4873]: 
I0219 09:44:51.148097 4873 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148148 4873 flags.go:64] FLAG: --contention-profiling="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148157 4873 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148167 4873 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148176 4873 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148185 4873 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148196 4873 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148205 4873 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148214 4873 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148223 4873 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148232 4873 flags.go:64] FLAG: --enable-server="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148241 4873 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148252 4873 flags.go:64] FLAG: --event-burst="100" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148261 4873 flags.go:64] FLAG: --event-qps="50" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148270 4873 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148279 4873 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148288 4873 flags.go:64] FLAG: --eviction-hard="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.148299 4873 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148308 4873 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148317 4873 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148326 4873 flags.go:64] FLAG: --eviction-soft="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148335 4873 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148343 4873 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148352 4873 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148361 4873 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148370 4873 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148378 4873 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148387 4873 flags.go:64] FLAG: --feature-gates="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148398 4873 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148407 4873 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148416 4873 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148426 4873 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148436 4873 flags.go:64] FLAG: --healthz-port="10248" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148445 4873 flags.go:64] FLAG: --help="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.148453 4873 flags.go:64] FLAG: --hostname-override="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148462 4873 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148473 4873 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148483 4873 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148492 4873 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148500 4873 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148509 4873 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148518 4873 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148527 4873 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148536 4873 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148544 4873 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148553 4873 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148562 4873 flags.go:64] FLAG: --kube-reserved="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148571 4873 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148580 4873 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148589 4873 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148597 4873 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 09:44:51 crc 
kubenswrapper[4873]: I0219 09:44:51.148606 4873 flags.go:64] FLAG: --lock-file="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148615 4873 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148624 4873 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148634 4873 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148646 4873 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148656 4873 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148666 4873 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148676 4873 flags.go:64] FLAG: --logging-format="text" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148686 4873 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148695 4873 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148703 4873 flags.go:64] FLAG: --manifest-url="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148712 4873 flags.go:64] FLAG: --manifest-url-header="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148723 4873 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148733 4873 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148743 4873 flags.go:64] FLAG: --max-pods="110" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148752 4873 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148762 4873 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 09:44:51 crc 
kubenswrapper[4873]: I0219 09:44:51.148772 4873 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148780 4873 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148789 4873 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148798 4873 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148807 4873 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148825 4873 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148835 4873 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148844 4873 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148853 4873 flags.go:64] FLAG: --pod-cidr="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148861 4873 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148874 4873 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148882 4873 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148891 4873 flags.go:64] FLAG: --pods-per-core="0" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148900 4873 flags.go:64] FLAG: --port="10250" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148909 4873 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148918 4873 flags.go:64] FLAG: 
--provider-id="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148928 4873 flags.go:64] FLAG: --qos-reserved="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148936 4873 flags.go:64] FLAG: --read-only-port="10255" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148945 4873 flags.go:64] FLAG: --register-node="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148954 4873 flags.go:64] FLAG: --register-schedulable="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148963 4873 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148977 4873 flags.go:64] FLAG: --registry-burst="10" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148986 4873 flags.go:64] FLAG: --registry-qps="5" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.148994 4873 flags.go:64] FLAG: --reserved-cpus="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149003 4873 flags.go:64] FLAG: --reserved-memory="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149013 4873 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149022 4873 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149032 4873 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149040 4873 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149049 4873 flags.go:64] FLAG: --runonce="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149058 4873 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149068 4873 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149078 4873 flags.go:64] FLAG: --seccomp-default="false" Feb 
19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149087 4873 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149096 4873 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149128 4873 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149138 4873 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149147 4873 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149156 4873 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149164 4873 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149173 4873 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149182 4873 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149192 4873 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149201 4873 flags.go:64] FLAG: --system-cgroups="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149209 4873 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149223 4873 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149232 4873 flags.go:64] FLAG: --tls-cert-file="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149241 4873 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149252 4873 flags.go:64] FLAG: --tls-min-version="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149261 4873 flags.go:64] FLAG: 
--tls-private-key-file="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149269 4873 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149279 4873 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149288 4873 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149296 4873 flags.go:64] FLAG: --v="2" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149307 4873 flags.go:64] FLAG: --version="false" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149340 4873 flags.go:64] FLAG: --vmodule="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149351 4873 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.149361 4873 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149582 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149592 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149602 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149610 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149623 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149632 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149641 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149650 
4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149658 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149667 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149676 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149683 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149691 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149699 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149706 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149714 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149722 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149729 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149737 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149748 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149765 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149773 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149780 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149788 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149796 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149804 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149811 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149819 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149827 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149835 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149842 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149850 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149858 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149873 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149882 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149891 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149900 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149909 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149918 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149926 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149933 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149942 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149950 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149957 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149965 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149972 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149981 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.149988 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 
09:44:51.149996 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150003 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150011 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150018 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150029 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150039 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150048 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150059 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150069 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150078 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150086 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150094 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150128 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150136 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150145 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150152 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150164 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150175 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150183 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150191 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150199 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.150207 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 
09:44:51.150221 4873 feature_gate.go:330] unrecognized feature gate: Example Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.150248 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.164933 4873 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.164983 4873 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165175 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165207 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165221 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165229 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165259 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165269 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165277 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165286 4873 feature_gate.go:330] unrecognized feature 
gate: AWSClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165295 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165305 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165313 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165359 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165369 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165377 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165385 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165394 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165401 4873 feature_gate.go:330] unrecognized feature gate: Example Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165409 4873 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165417 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165425 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165433 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165444 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165454 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165463 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165471 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165480 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165488 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165496 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165504 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165512 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165520 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165528 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165537 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165545 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165554 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165561 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 
09:44:51.165569 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165578 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165585 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165594 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165601 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165611 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165623 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165631 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165640 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165650 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165661 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165671 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165680 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165689 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165697 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165705 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165713 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165721 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165729 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165737 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165744 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165752 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165760 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165767 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165775 4873 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165783 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165791 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165800 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165807 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165816 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165824 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165832 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165840 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165850 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.165860 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.165873 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166097 4873 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166143 4873 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166157 4873 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166169 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166177 4873 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166185 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166193 4873 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166200 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166208 4873 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166216 4873 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166224 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166232 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166240 4873 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166249 4873 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166257 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166265 4873 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166272 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 09:44:51 crc kubenswrapper[4873]: 
W0219 09:44:51.166280 4873 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166288 4873 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166296 4873 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166306 4873 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166317 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166325 4873 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166337 4873 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166358 4873 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166374 4873 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166384 4873 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166393 4873 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166402 4873 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166413 4873 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166422 4873 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 
09:44:51.166432 4873 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166442 4873 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166451 4873 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166462 4873 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166472 4873 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166481 4873 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166492 4873 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166502 4873 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166512 4873 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166522 4873 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166532 4873 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166541 4873 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166548 4873 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166556 4873 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166564 4873 feature_gate.go:330] unrecognized feature gate: 
GatewayAPI Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166572 4873 feature_gate.go:330] unrecognized feature gate: Example Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166579 4873 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166587 4873 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166595 4873 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166603 4873 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166611 4873 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166619 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166629 4873 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166638 4873 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166646 4873 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166654 4873 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166664 4873 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166674 4873 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166684 4873 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166692 4873 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166701 4873 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166709 4873 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166718 4873 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166726 4873 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166735 4873 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166745 4873 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166754 4873 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166762 4873 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166770 4873 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.166778 4873 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.166791 4873 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.167918 4873 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.173804 4873 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.173965 4873 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.175868 4873 server.go:997] "Starting client certificate rotation" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.175919 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.176823 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 07:45:37.779704611 +0000 UTC Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.176948 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.208542 4873 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.211952 4873 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.214074 4873 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.234348 4873 log.go:25] "Validated CRI v1 runtime API" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.278037 4873 log.go:25] "Validated CRI v1 image API" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.281202 4873 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.288288 4873 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-09-39-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.288347 4873 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.329868 4873 manager.go:217] Machine: {Timestamp:2026-02-19 09:44:51.323883527 +0000 UTC m=+0.613315245 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9f3a4afb-9582-465c-ace5-f370996d8eea BootID:c923eae3-7568-4314-b0e0-48838f6e14fe Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:52:39:1c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:52:39:1c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:27:a0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:52:11:88 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b5:49:de Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7a:0a:86 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:a4:4a:63:04:72 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5a:b3:66:2f:b2:c4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.330347 4873 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.330697 4873 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.340015 4873 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.340609 4873 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.340693 4873 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"
Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.341201 4873 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.341222 4873 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.343130 4873 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.343190 4873 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.344262 4873 state_mem.go:36] "Initialized new in-memory state store" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.344399 4873 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.350019 4873 kubelet.go:418] "Attempting to sync node with API server" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.350056 4873 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.350095 4873 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.350131 4873 kubelet.go:324] "Adding apiserver pod source" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.350152 4873 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.357897 4873 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.378207 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.378225 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.378404 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.378401 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.383171 4873 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.386215 4873 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.402954 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403004 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403020 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403034 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403060 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403075 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403089 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403137 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403161 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403177 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403220 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.403237 4873 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.404562 4873 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.405530 4873 server.go:1280] "Started kubelet" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.407135 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.407518 4873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.407963 4873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.408509 4873 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 09:44:51 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.409983 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.410026 4873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.410269 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:59:13.504771488 +0000 UTC Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.410352 4873 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.410368 4873 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.410490 4873 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.411031 4873 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.411989 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.412134 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412215 4873 factory.go:55] Registering systemd factory Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412241 4873 factory.go:221] Registration of the systemd container factory successfully Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412536 4873 factory.go:153] Registering CRI-O factory Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412572 4873 factory.go:221] Registration of the crio container factory successfully Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412670 4873 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412698 4873 factory.go:103] Registering Raw factory Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.412712 4873 manager.go:1196] Started watching for new ooms in manager Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.413249 4873 manager.go:319] Starting recovery of all containers Feb 19 
09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.421038 4873 server.go:460] "Adding debug handlers to kubelet server" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.422382 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.425030 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18959cac18da8839 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:44:51.405441081 +0000 UTC m=+0.694872759,LastTimestamp:2026-02-19 09:44:51.405441081 +0000 UTC m=+0.694872759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432096 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432172 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.432185 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432199 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432212 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432224 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432236 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432247 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432266 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432282 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432297 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432312 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432327 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432346 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432364 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432378 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432394 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432408 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432422 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432440 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432463 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432478 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432495 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432508 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432546 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432560 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432580 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432617 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432630 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432642 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432656 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432678 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432691 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" 
seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432705 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432718 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432734 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432749 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432766 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432779 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.432791 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432804 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432816 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432828 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432840 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432852 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432864 4873 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432878 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432891 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432902 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432914 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432927 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432937 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432957 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432971 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432985 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.432998 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433009 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433020 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" 
seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433033 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433046 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433059 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433069 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433082 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433094 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.433132 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433145 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433157 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433168 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433180 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433191 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433202 4873 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433212 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433223 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433234 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433245 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433256 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433266 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433277 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433289 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433300 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433313 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433325 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433337 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433349 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433363 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433379 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433395 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433413 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433431 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433447 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433494 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433515 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433531 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433547 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433573 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433589 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433613 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433628 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433643 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433660 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433677 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433709 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433732 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433750 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433774 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433792 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433811 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433830 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433845 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433863 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433883 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433900 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433917 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433934 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433952 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433969 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.433986 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434001 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434016 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.434031 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434046 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434063 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434080 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434095 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434141 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434156 4873 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434173 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434188 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434205 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434219 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.434232 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437199 4873 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437260 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437280 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437317 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437328 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437337 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437345 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437355 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437364 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437373 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437382 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437392 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437402 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437410 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437419 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437428 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437439 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437454 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437499 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437536 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437548 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437559 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437572 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437583 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437594 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437605 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437617 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437630 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437642 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437652 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437662 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437672 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437683 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437693 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437705 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437752 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437763 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437774 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437786 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437800 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437812 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437826 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437837 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437850 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437863 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437874 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437889 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437900 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437911 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" 
seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437921 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437932 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437944 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437956 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437967 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437977 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437987 4873 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.437999 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438009 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438020 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438033 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438045 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438056 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438068 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438079 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438091 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438130 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438142 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438156 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438166 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438179 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438190 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438203 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438214 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438225 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438237 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438251 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438262 4873 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438273 4873 reconstruct.go:97] "Volume reconstruction finished" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.438281 4873 reconciler.go:26] "Reconciler: start to sync state" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.456521 4873 manager.go:324] Recovery completed Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.472204 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.474093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.474193 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.474217 
4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.477657 4873 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.477685 4873 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.477709 4873 state_mem.go:36] "Initialized new in-memory state store" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.480222 4873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.482852 4873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.482892 4873 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.482920 4873 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.482962 4873 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.483485 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.483536 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 
09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.501316 4873 policy_none.go:49] "None policy: Start" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.502789 4873 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.502813 4873 state_mem.go:35] "Initializing new in-memory state store" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.511335 4873 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.556831 4873 manager.go:334] "Starting Device Plugin manager" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.556928 4873 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.556947 4873 server.go:79] "Starting device plugin registration server" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.557487 4873 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.557504 4873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.558016 4873 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.558149 4873 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.558162 4873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.572734 4873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.583128 4873 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.583269 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.584552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.584585 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.584598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.584743 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.585519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.585593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.585606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.585977 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586000 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586075 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586215 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586258 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.586989 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587020 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587054 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587077 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 
09:44:51.587457 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587489 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587605 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.587979 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588116 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588138 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588888 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.588914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589215 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589239 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.589796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.623617 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.639485 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.639542 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.639578 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.639874 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.639991 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640018 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640068 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640090 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640170 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640251 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640329 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640383 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640405 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640475 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.640497 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.657767 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.658976 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.659025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.659043 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.659076 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.659507 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc" Feb 19 09:44:51 
crc kubenswrapper[4873]: I0219 09:44:51.742238 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742317 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742349 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742475 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742490 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742503 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742408 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742409 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742430 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742626 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742687 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742701 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742726 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742740 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742789 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742817 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742864 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742886 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742965 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743002 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.742658 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743325 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743328 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743353 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743454 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743491 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743416 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743456 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743387 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.743390 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.859640 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.862050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.862142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.862168 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.862210 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:44:51 crc kubenswrapper[4873]: E0219 09:44:51.862723 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: 
connection refused" node="crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.934642 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.941680 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.963995 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.981476 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: I0219 09:44:51.987396 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.993273 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-309d535853d92ee5ad819d55c1b6516ab847f003f9df4fe93649241faa3724f4 WatchSource:0}: Error finding container 309d535853d92ee5ad819d55c1b6516ab847f003f9df4fe93649241faa3724f4: Status 404 returned error can't find the container with id 309d535853d92ee5ad819d55c1b6516ab847f003f9df4fe93649241faa3724f4 Feb 19 09:44:51 crc kubenswrapper[4873]: W0219 09:44:51.995639 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5a6aaa1da52275ef72223e1cc913627b16fb7e9c85c6eb03286c7e0983329157 WatchSource:0}: Error finding container 5a6aaa1da52275ef72223e1cc913627b16fb7e9c85c6eb03286c7e0983329157: Status 404 returned error can't find the container 
with id 5a6aaa1da52275ef72223e1cc913627b16fb7e9c85c6eb03286c7e0983329157 Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.010068 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6a852f3be023d389f130d5fc1be20b6f79da82ba302da7b2fc3a80148899a0e7 WatchSource:0}: Error finding container 6a852f3be023d389f130d5fc1be20b6f79da82ba302da7b2fc3a80148899a0e7: Status 404 returned error can't find the container with id 6a852f3be023d389f130d5fc1be20b6f79da82ba302da7b2fc3a80148899a0e7 Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.013247 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1376a3893a8ddccebdda3e39aff9f82a51b9e977dcd795cfc3a7f568c1393570 WatchSource:0}: Error finding container 1376a3893a8ddccebdda3e39aff9f82a51b9e977dcd795cfc3a7f568c1393570: Status 404 returned error can't find the container with id 1376a3893a8ddccebdda3e39aff9f82a51b9e977dcd795cfc3a7f568c1393570 Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.016587 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d824b161819a08cd245fbd6a87da0a77ef2cd5e8c57e068bdeee515d83c95545 WatchSource:0}: Error finding container d824b161819a08cd245fbd6a87da0a77ef2cd5e8c57e068bdeee515d83c95545: Status 404 returned error can't find the container with id d824b161819a08cd245fbd6a87da0a77ef2cd5e8c57e068bdeee515d83c95545 Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.025391 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" 
interval="800ms" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.263663 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.266633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.266678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.266690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.266718 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.267222 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc" Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.339142 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.339245 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.408358 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.411378 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:01:30.687324868 +0000 UTC Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.487711 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a852f3be023d389f130d5fc1be20b6f79da82ba302da7b2fc3a80148899a0e7"} Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.489043 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a6aaa1da52275ef72223e1cc913627b16fb7e9c85c6eb03286c7e0983329157"} Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.490489 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"309d535853d92ee5ad819d55c1b6516ab847f003f9df4fe93649241faa3724f4"} Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.491913 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d824b161819a08cd245fbd6a87da0a77ef2cd5e8c57e068bdeee515d83c95545"} Feb 19 09:44:52 crc kubenswrapper[4873]: I0219 09:44:52.493287 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1376a3893a8ddccebdda3e39aff9f82a51b9e977dcd795cfc3a7f568c1393570"} Feb 
19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.718330 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.718776 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.733900 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.733997 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.826747 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s" Feb 19 09:44:52 crc kubenswrapper[4873]: W0219 09:44:52.872707 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:52 crc kubenswrapper[4873]: E0219 09:44:52.872783 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.068356 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.069945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.069995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.070006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.070036 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:44:53 crc kubenswrapper[4873]: E0219 09:44:53.070611 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.233910 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 09:44:53 crc kubenswrapper[4873]: E0219 09:44:53.235003 4873 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.408165 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.412470 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:37:45.223057532 +0000 UTC Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.497961 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b" exitCode=0 Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.498124 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.498366 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.499064 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.499089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 
crc kubenswrapper[4873]: I0219 09:44:53.499098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500069 4873 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e" exitCode=0 Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500130 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500198 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.500998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.502527 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.502574 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.502588 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.502599 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.502667 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.503613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.503642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.503654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.504351 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279" exitCode=0 Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.504412 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.504494 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.505004 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.505024 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.505034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.505995 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d" exitCode=0 Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.506030 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d"} Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.506056 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.506543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.506582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.506591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:53 crc 
kubenswrapper[4873]: I0219 09:44:53.510388 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.516077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.516163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:53 crc kubenswrapper[4873]: I0219 09:44:53.516199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.408816 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.413565 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:07:44.838091617 +0000 UTC
Feb 19 09:44:54 crc kubenswrapper[4873]: E0219 09:44:54.427651 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="3.2s"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.515051 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.515138 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.515150 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.515257 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.516306 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.516338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.516349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.519543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.519572 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.519583 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.519591 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.522536 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5" exitCode=0
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.522572 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.522656 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.523366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.523392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.523401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.525766 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526168 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526298 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a"}
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526574 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526936 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526968 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.526980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: W0219 09:44:54.664004 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Feb 19 09:44:54 crc kubenswrapper[4873]: E0219 09:44:54.664081 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.671755 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.672914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.672959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.672974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.673017 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 09:44:54 crc kubenswrapper[4873]: E0219 09:44:54.673616 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Feb 19 09:44:54 crc kubenswrapper[4873]: W0219 09:44:54.749089 4873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Feb 19 09:44:54 crc kubenswrapper[4873]: E0219 09:44:54.749191 4873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Feb 19 09:44:54 crc kubenswrapper[4873]: I0219 09:44:54.876343 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.414329 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:03:31.027898458 +0000 UTC
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.533954 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7"}
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.534024 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.534762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.534795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.534805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536239 4873 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673" exitCode=0
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536339 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536363 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536710 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673"}
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536770 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536955 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.536986 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537172 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537193 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537729 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.537736 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.538030 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.538052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:55 crc kubenswrapper[4873]: I0219 09:44:55.538063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.086676 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.415377 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:52:23.00816072 +0000 UTC
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.551306 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.551359 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.551910 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d61ec168f9dd0b793cb746abec5fd0922cbabf69b71ae55d81c0523be8fac9d9"}
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.551998 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"72381fa08fdd672973e1c6e175291b3d89a8f375117c23c7efc939fead3f62ba"}
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552024 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b37236dcd27e2d6048ee4976e2f411df9eaf77c3bff9f07f06243e70b01cb63a"}
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552042 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e2155b0ec1873858607c3d52b287a5533d1bbf707b7a39b4ca93e73177b2b98"}
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552058 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.552794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.553548 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.553578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.553588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.657978 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.761832 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.762005 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.763198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.763262 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:56 crc kubenswrapper[4873]: I0219 09:44:56.763287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.416564 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:06:24.772723623 +0000 UTC
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.522880 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.561024 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"95584c308545e3ea96b9d6bd6712f4e8d355baeae70edf62e3499a0e984a13af"}
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.561088 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.561193 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.561130 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562555 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.562634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.874256 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.875573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.875625 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.875647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.875680 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.876627 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.876687 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.998030 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.998298 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.999792 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:57 crc kubenswrapper[4873]: I0219 09:44:57.999842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:57.999861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.036455 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.416832 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:35:43.029480335 +0000 UTC
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.564177 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.564190 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.565722 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.565782 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.565806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.566508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.566573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.566595 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.654920 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.655729 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.657703 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.657761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.657780 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:58 crc kubenswrapper[4873]: I0219 09:44:58.665607 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.417548 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:52:38.370784259 +0000 UTC
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.427012 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.504494 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.567356 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.567369 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.567531 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.568960 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569300 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569326 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:59 crc kubenswrapper[4873]: I0219 09:44:59.569737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:00 crc kubenswrapper[4873]: I0219 09:45:00.418584 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:36:38.915530947 +0000 UTC
Feb 19 09:45:01 crc kubenswrapper[4873]: I0219 09:45:01.419513 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:46:21.363816371 +0000 UTC
Feb 19 09:45:01 crc kubenswrapper[4873]: E0219 09:45:01.573534 4873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 09:45:02 crc kubenswrapper[4873]: I0219 09:45:02.419662 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:51:13.242572976 +0000 UTC
Feb 19 09:45:03 crc kubenswrapper[4873]: I0219 09:45:03.420634 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:12:44.15179874 +0000 UTC
Feb 19 09:45:04 crc kubenswrapper[4873]: I0219 09:45:04.421704 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:09:43.366389459 +0000 UTC
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.408784 4873 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.423015 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:04:24.316389056 +0000 UTC
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.446437 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38292->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.446706 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38292->192.168.126.11:17697: read: connection reset by peer"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.499148 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.499210 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.504361 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.504421 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.585514 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.586931 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7" exitCode=255
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.586970 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7"}
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.587147 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.588061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.588088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.588134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.588543 4873 scope.go:117] "RemoveContainer" containerID="70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.740645 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.740823 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.743567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.743597 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.743609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:05 crc kubenswrapper[4873]: I0219 09:45:05.774817 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.424126 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:18:44.467177582 +0000 UTC
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.591508 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.593473 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.593425 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90"}
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.593810 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.594500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.594533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.594546 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.595206 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.595252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.595266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.608914 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.666160 4873 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]log ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]etcd ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/priority-and-fairness-filter ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-apiextensions-informers ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-apiextensions-controllers ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/crd-informer-synced ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-system-namespaces-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 19 09:45:06 crc kubenswrapper[4873]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/bootstrap-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/start-kube-aggregator-informers ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 19 09:45:06 crc 
kubenswrapper[4873]: [+]poststarthook/apiservice-registration-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-discovery-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]autoregister-completion ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-openapi-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 19 09:45:06 crc kubenswrapper[4873]: livez check failed Feb 19 09:45:06 crc kubenswrapper[4873]: I0219 09:45:06.666226 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.425095 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:33:10.128661185 +0000 UTC Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.595540 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.596519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.596549 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.596561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.877231 4873 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 09:45:07 crc kubenswrapper[4873]: I0219 09:45:07.877294 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.006035 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.006303 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.008810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.008863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.008881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.037371 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.037605 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.039264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.039320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.039344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:08 crc kubenswrapper[4873]: I0219 09:45:08.425901 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:18:33.543596538 +0000 UTC Feb 19 09:45:09 crc kubenswrapper[4873]: I0219 09:45:09.426581 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:38:07.226503819 +0000 UTC Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.427435 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:08:06.647487542 +0000 UTC Feb 19 09:45:10 crc kubenswrapper[4873]: E0219 09:45:10.481555 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.485294 4873 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.485369 4873 trace.go:236] Trace[1568776502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:44:55.698) (total time: 14787ms): Feb 19 09:45:10 crc kubenswrapper[4873]: 
Trace[1568776502]: ---"Objects listed" error: 14787ms (09:45:10.485) Feb 19 09:45:10 crc kubenswrapper[4873]: Trace[1568776502]: [14.787065808s] [14.787065808s] END Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.485412 4873 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:10 crc kubenswrapper[4873]: E0219 09:45:10.486173 4873 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.487458 4873 trace.go:236] Trace[886226136]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:44:58.170) (total time: 12316ms): Feb 19 09:45:10 crc kubenswrapper[4873]: Trace[886226136]: ---"Objects listed" error: 12316ms (09:45:10.487) Feb 19 09:45:10 crc kubenswrapper[4873]: Trace[886226136]: [12.316509046s] [12.316509046s] END Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.487538 4873 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.488015 4873 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.489532 4873 trace.go:236] Trace[91785162]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:44:56.063) (total time: 14425ms): Feb 19 09:45:10 crc kubenswrapper[4873]: Trace[91785162]: ---"Objects listed" error: 14425ms (09:45:10.489) Feb 19 09:45:10 crc kubenswrapper[4873]: Trace[91785162]: [14.42598717s] [14.42598717s] END Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.489578 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:10 crc kubenswrapper[4873]: I0219 09:45:10.518552 4873 reflector.go:368] Caches 
populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.362332 4873 apiserver.go:52] "Watching apiserver" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.368909 4873 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.369623 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.370588 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.370906 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.370918 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.372002 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.371378 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.371843 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.371877 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.373245 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.371284 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.384088 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.384248 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.384504 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.384874 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.385009 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.385302 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.385584 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.385765 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.390431 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.411066 4873 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.428203 4873 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:26:41.748402162 +0000 UTC Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.433928 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.450859 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.468063 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.482944 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.494652 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495008 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495254 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495454 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495679 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495946 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496247 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496473 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496693 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496882 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497031 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497289 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497503 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497683 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497865 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498020 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498268 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498456 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495480 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495506 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495682 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.495887 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496145 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496712 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.496888 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497039 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497232 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497542 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497700 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.497862 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498308 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498466 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498625 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498949 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.499507 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.499594 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.499993 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.498647 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.500528 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.500675 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.500851 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.500984 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501009 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501131 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501189 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501331 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501342 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501377 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501421 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501456 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501498 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501586 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501624 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501669 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501705 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501742 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501783 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501826 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 
09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501868 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501870 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501909 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501947 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.501981 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502018 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502051 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502084 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502162 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502216 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502260 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502293 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502325 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502359 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502394 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502429 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502461 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502533 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502569 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502602 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502635 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502670 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502708 
4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502741 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502774 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502810 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502847 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502882 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502916 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502953 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502988 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503021 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503055 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503136 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503174 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503208 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503240 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503281 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503317 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503480 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503521 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503554 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503589 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503623 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502163 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503683 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503721 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503757 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503791 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc 
kubenswrapper[4873]: I0219 09:45:11.503825 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503860 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503896 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503932 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503966 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504005 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504078 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504152 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504202 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504260 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504299 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504332 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504364 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504402 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504438 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504473 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504507 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504591 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504624 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504658 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504693 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504735 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504769 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504807 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504841 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504875 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504909 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504941 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 
19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504974 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505010 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505044 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505079 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505170 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505206 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505239 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505274 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505313 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505355 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505392 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505431 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505481 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505517 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505552 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505635 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505671 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" 
(UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505712 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505748 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505784 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505820 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505857 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505893 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505930 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505967 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506006 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506041 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506075 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506143 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506200 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506256 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506292 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506327 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506364 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506398 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506438 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506474 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506508 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506545 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506585 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506626 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506660 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506696 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506770 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506807 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506843 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506878 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506914 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506960 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506995 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507030 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507068 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507133 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507171 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507211 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" 
(UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507250 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507287 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507325 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507920 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507958 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 
09:45:11.507999 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508060 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508100 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508162 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508202 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508240 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508276 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508314 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508354 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508392 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508428 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:45:11 crc 
kubenswrapper[4873]: I0219 09:45:11.508464 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508501 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508541 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508578 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508617 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508654 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508689 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508724 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508760 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508801 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508838 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508877 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508946 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508996 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509046 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509086 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509289 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509349 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509388 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509432 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509516 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509557 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509597 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509636 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509678 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509786 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509812 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509836 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509859 4873 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509891 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509914 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc 
kubenswrapper[4873]: I0219 09:45:11.509937 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509960 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509982 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510004 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510029 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510051 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510074 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: 
I0219 09:45:11.510095 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510157 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510186 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510210 4873 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510231 4873 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510256 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510278 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510298 
4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510321 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510344 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510365 4873 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510537 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.512423 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502169 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: 
"trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502510 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502522 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502721 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502783 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.502880 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503077 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503212 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503239 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503427 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503600 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.524766 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503663 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503826 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.503977 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504216 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504391 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504416 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504715 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.504724 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505001 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505293 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.505736 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506118 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506403 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506678 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.506739 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507288 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507380 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.525517 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.508272 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509029 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509577 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509753 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509850 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.509878 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510121 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510212 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510310 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510307 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510420 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510486 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.510831 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.511012 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.511122 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.511556 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.511596 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.511664 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.525975 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.516092 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.516360 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.517414 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.517944 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.518026 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.518074 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.518332 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.518350 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.519185 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.519322 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.519670 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.519772 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.519869 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:12.019829647 +0000 UTC m=+21.309261325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.519943 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520255 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520439 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520419 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520546 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520766 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520799 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.520826 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521012 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521122 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521428 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521534 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521588 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.521824 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522130 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522213 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522525 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522563 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522599 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522895 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.522945 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.523029 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.523448 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.523978 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.523976 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.524209 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.524567 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.524842 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.525286 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.507689 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.525773 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.525774 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.526657 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.528616 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.528948 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.529075 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.529732 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.530280 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.530300 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.530572 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.530978 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.531056 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.531252 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.531678 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.532274 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.532333 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.533899 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.534266 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.534634 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.534837 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.534919 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.535196 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535202 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.535197 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535296 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535415 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535692 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535754 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.535909 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.536624 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.536650 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.536724 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.536396 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537067 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537086 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537284 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.536793 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.537482 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:12.037441824 +0000 UTC m=+21.326873502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537486 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.537520 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:12.037502355 +0000 UTC m=+21.326934033 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537665 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537672 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537722 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.537717 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538085 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538180 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538666 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538735 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538726 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.538783 4873 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.539079 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.539403 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.544824 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.546576 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.546599 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.546664 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.547597 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.549248 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.549898 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.549961 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.550041 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.550906 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.551000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.551383 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.554765 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.554878 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.555898 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.556215 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.556768 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.559361 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.559824 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.559937 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.560186 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.560696 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.563062 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.564389 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.564911 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.565187 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.565963 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.570187 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575161 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575195 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575207 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575279 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:12.075260903 +0000 UTC m=+21.364692541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575586 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575613 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575622 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.575661 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:12.075648662 +0000 UTC m=+21.365080300 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.579530 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.580776 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.584187 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.584207 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.584562 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.585706 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.588966 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.589416 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.589702 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.590293 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.601923 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.607447 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.607519 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.608428 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.608860 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.610830 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" exitCode=255 Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.610895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90"} Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.610979 4873 scope.go:117] "RemoveContainer" containerID="70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.611585 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.611895 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.612423 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613171 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613371 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613369 4873 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613537 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613592 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613668 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.614782 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621446 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.613700 4873 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621571 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621585 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621597 4873 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621609 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621618 4873 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621628 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621640 4873 reconciler_common.go:293] "Volume detached 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621653 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621685 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621700 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621715 4873 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621727 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621735 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621745 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" 
(UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621755 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621764 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621773 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621783 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621792 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621801 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621811 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621820 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621831 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621843 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621854 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621867 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621879 4873 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621891 4873 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621904 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621913 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621923 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621934 4873 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621944 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621953 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621963 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621972 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621981 4873 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621990 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.621999 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622007 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622018 4873 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622027 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 
09:45:11.622035 4873 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622045 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622054 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622063 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622072 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622081 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622090 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622115 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622126 4873 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622138 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622201 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622214 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622226 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622262 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622278 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622290 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622329 4873 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622342 4873 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622352 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622360 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622369 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622380 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622389 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622573 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622587 4873 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622597 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622608 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622617 4873 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622630 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622648 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622658 4873 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622667 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622676 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622685 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622694 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622703 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622712 4873 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622721 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622742 4873 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622751 4873 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622760 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622770 4873 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622781 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622790 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622799 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.622814 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623000 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623138 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623171 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623185 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623198 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623213 4873 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623225 4873 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623234 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623231 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623244 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623314 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623327 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623339 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node 
\"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623350 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623360 4873 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623370 4873 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623380 4873 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623399 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623411 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623422 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623433 4873 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623441 4873 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623451 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623459 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623469 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623478 4873 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623487 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623496 4873 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623506 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623515 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623524 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623532 4873 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623541 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623550 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623559 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 
crc kubenswrapper[4873]: I0219 09:45:11.623568 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623577 4873 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623586 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623595 4873 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: E0219 09:45:11.623413 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623627 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623639 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623714 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623725 4873 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623735 4873 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623745 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623754 4873 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623763 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623773 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node 
\"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623783 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623792 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623802 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623812 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623821 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623831 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623842 4873 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623854 4873 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623864 4873 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623877 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623887 4873 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623898 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623908 4873 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623918 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623929 4873 reconciler_common.go:293] "Volume detached for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623938 4873 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623947 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623956 4873 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623965 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623975 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623987 4873 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.623996 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624006 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624015 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624025 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624037 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624046 4873 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624055 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624065 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.624141 4873 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.633428 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.642469 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.651573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.661725 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.664092 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.671544 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.680728 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.691882 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.703861 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.706047 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.718937 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:05Z\\\",\\\"message\\\":\\\"W0219 09:44:54.615928 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:44:54.616259 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494294 cert, and key in /tmp/serving-cert-40864076/serving-signer.crt, /tmp/serving-cert-40864076/serving-signer.key\\\\nI0219 09:44:54.929626 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:44:54.937377 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:44:54.937554 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:44:54.940417 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-40864076/tls.crt::/tmp/serving-cert-40864076/tls.key\\\\\\\"\\\\nF0219 09:45:05.433822 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.721913 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.725477 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.725519 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.730902 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.736031 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:05Z\\\",\\\"message\\\":\\\"W0219 09:44:54.615928 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:44:54.616259 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494294 cert, and key in /tmp/serving-cert-40864076/serving-signer.crt, /tmp/serving-cert-40864076/serving-signer.key\\\\nI0219 09:44:54.929626 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:44:54.937377 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:44:54.937554 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:44:54.940417 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-40864076/tls.crt::/tmp/serving-cert-40864076/tls.key\\\\\\\"\\\\nF0219 09:45:05.433822 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: W0219 09:45:11.747432 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a2b37d751f9fecf06f003a04ebbc699170ec384c85835bd79e523ef75818a9ae WatchSource:0}: Error finding container a2b37d751f9fecf06f003a04ebbc699170ec384c85835bd79e523ef75818a9ae: Status 404 returned error can't find the container with id a2b37d751f9fecf06f003a04ebbc699170ec384c85835bd79e523ef75818a9ae Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.752565 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: W0219 09:45:11.759040 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-585f0209df108bb1488da3b7671de75e7f1820ba89e465e9ba6b44be7a60fd58 WatchSource:0}: Error finding container 585f0209df108bb1488da3b7671de75e7f1820ba89e465e9ba6b44be7a60fd58: Status 404 returned error can't find the container with id 585f0209df108bb1488da3b7671de75e7f1820ba89e465e9ba6b44be7a60fd58 Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.764643 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.777351 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.798013 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.812004 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:11 crc kubenswrapper[4873]: I0219 09:45:11.825009 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.030263 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.030438 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:13.030410462 +0000 UTC m=+22.319842110 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.131144 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.131199 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.131234 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.131261 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131391 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131428 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131464 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131497 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131514 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131479 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:13.13145459 +0000 UTC m=+22.420886248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131594 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:13.131568583 +0000 UTC m=+22.421000241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131612 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:13.131603574 +0000 UTC m=+22.421035342 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131869 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131944 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.131963 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.132054 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:13.132035865 +0000 UTC m=+22.421467533 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.429685 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:23:20.002372647 +0000 UTC Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.483739 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.483885 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.614775 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.614826 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7af00045211f8e3e88ed8a53b838d190c4150f96840141a108e844fe99fc0c39"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.617203 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.620093 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:12 crc kubenswrapper[4873]: E0219 09:45:12.620280 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.622330 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.622375 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.622390 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"585f0209df108bb1488da3b7671de75e7f1820ba89e465e9ba6b44be7a60fd58"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.623686 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a2b37d751f9fecf06f003a04ebbc699170ec384c85835bd79e523ef75818a9ae"} Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.624384 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.630542 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70fd060754ce5d2cd66259d212c3ff5be0347c059a51e1f6b9bda74e1ed507e7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:05Z\\\",\\\"message\\\":\\\"W0219 09:44:54.615928 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:44:54.616259 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494294 cert, and key in /tmp/serving-cert-40864076/serving-signer.crt, /tmp/serving-cert-40864076/serving-signer.key\\\\nI0219 09:44:54.929626 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:44:54.937377 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:44:54.937554 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:44:54.940417 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-40864076/tls.crt::/tmp/serving-cert-40864076/tls.key\\\\\\\"\\\\nF0219 09:45:05.433822 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.645769 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.656535 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.668889 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.687306 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.707251 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.727711 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.744143 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.764504 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.779920 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.802833 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.820900 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.840814 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:12 crc kubenswrapper[4873]: I0219 09:45:12.862560 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:12Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.040362 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.040629 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:15.04059287 +0000 UTC m=+24.330024538 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.141864 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.141939 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.141984 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.142072 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142080 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142245 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142276 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142323 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:15.142300235 +0000 UTC m=+24.431731883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142357 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 09:45:15.142334666 +0000 UTC m=+24.431766344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142433 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142478 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142503 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142718 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:15.142594812 +0000 UTC m=+24.432026500 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142750 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142783 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.142838 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:15.142823568 +0000 UTC m=+24.432255276 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.430132 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:28:00.367320706 +0000 UTC Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.483831 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.483872 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.484002 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.484139 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.488618 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.489559 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.490824 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.491776 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.492634 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.493384 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.494205 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.495005 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.497660 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.499015 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.500302 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.501847 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.502925 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.504067 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.505367 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.506610 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.508906 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.509346 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.509886 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.510460 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.510929 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.511490 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.512813 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.513462 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.514263 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.515818 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.517631 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.518862 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.520265 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.521463 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.522702 4873 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.522918 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.525665 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.526398 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.526809 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.527894 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.528528 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.529010 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.529675 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.530335 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.530827 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.531421 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.532056 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.532672 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.533797 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.534481 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.535584 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.536425 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.537272 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.537704 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.538203 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.539074 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.539632 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.540496 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 09:45:13 crc kubenswrapper[4873]: I0219 09:45:13.629720 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:13 crc kubenswrapper[4873]: E0219 09:45:13.629869 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.322059 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.430301 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:35:12.029043828 +0000 UTC Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.483250 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:14 crc kubenswrapper[4873]: E0219 09:45:14.483434 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.633792 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:14 crc kubenswrapper[4873]: E0219 09:45:14.634061 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.884084 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.891497 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.897486 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.911192 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:14Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.937840 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0da
d8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:14Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.963574 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:14Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:14 crc kubenswrapper[4873]: I0219 09:45:14.981986 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:14Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.004488 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.027910 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.047734 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.055527 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.055814 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:19.055783996 +0000 UTC m=+28.345215674 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.068530 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.091343 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.111471 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.156897 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.156989 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.157038 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.157151 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157282 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157321 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157341 4873 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157376 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157416 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:19.157390119 +0000 UTC m=+28.446821797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157443 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157468 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:19.15743789 +0000 UTC m=+28.446869538 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157305 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157634 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:19.157601084 +0000 UTC m=+28.447032732 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157679 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157712 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.157830 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:19.157777898 +0000 UTC m=+28.447209676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.158616 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.177209 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.198870 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.223003 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0da
d8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.243517 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.430593 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:07:21.002602878 +0000 UTC Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.483734 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.483835 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.483918 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.484052 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.636965 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab"} Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.638317 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.638623 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 09:45:15 crc kubenswrapper[4873]: E0219 09:45:15.645720 4873 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.658498 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\"
:\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.682574 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.704997 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.726049 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.743181 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.758458 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.774784 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:15 crc kubenswrapper[4873]: I0219 09:45:15.793285 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc1
5eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.431340 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:12:56.967949963 +0000 UTC Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.483405 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.483581 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.886661 4873 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.888783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.888840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.888851 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.888924 4873 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.894114 4873 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.894474 4873 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.895549 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.895603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.895619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.895641 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.895657 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:16Z","lastTransitionTime":"2026-02-19T09:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.913449 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.917093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.917173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.917189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.917211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.917229 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:16Z","lastTransitionTime":"2026-02-19T09:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.930976 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.935620 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.935668 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.935683 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.935704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.935719 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:16Z","lastTransitionTime":"2026-02-19T09:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.947752 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.951691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.951732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.951748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.951769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.951790 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:16Z","lastTransitionTime":"2026-02-19T09:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.965929 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.969895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.970062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.970201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.970351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:16 crc kubenswrapper[4873]: I0219 09:45:16.970473 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:16Z","lastTransitionTime":"2026-02-19T09:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.998371 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:16 crc kubenswrapper[4873]: E0219 09:45:16.998764 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.000487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.000545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.000563 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.000592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.000610 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.091671 4873 csr.go:261] certificate signing request csr-ldd8c is approved, waiting to be issued Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.103515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.103559 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.103572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.103592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.103610 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.112274 4873 csr.go:257] certificate signing request csr-ldd8c is issued Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.205408 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.205442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.205451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.205465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.205474 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.307743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.307791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.307804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.307824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.307837 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.410395 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.410440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.410451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.410467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.410480 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.431552 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:09:57.267865004 +0000 UTC Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.483209 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.483264 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:17 crc kubenswrapper[4873]: E0219 09:45:17.483358 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:17 crc kubenswrapper[4873]: E0219 09:45:17.483442 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.512615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.512660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.512672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.512691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.512704 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.554083 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4pk8x"] Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.554420 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.555408 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pp77w"] Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.555721 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.556707 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.556705 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.556831 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.556851 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.557231 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.557928 4873 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.557980 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.558404 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.568850 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-control
ler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580683 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-system-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc 
kubenswrapper[4873]: I0219 09:45:17.580715 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580730 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-hostroot\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580757 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-k8s-cni-cncf-io\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580771 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-conf-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-daemon-config\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580798 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjnw\" (UniqueName: \"kubernetes.io/projected/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-kube-api-access-vnjnw\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580812 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmt6g\" (UniqueName: \"kubernetes.io/projected/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-kube-api-access-dmt6g\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580825 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-socket-dir-parent\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580841 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-netns\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580855 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cnibin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580869 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-kubelet\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580884 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-etc-kubernetes\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580903 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cni-binary-copy\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580917 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-bin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580930 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-multus\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580943 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-multus-certs\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580971 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-hosts-file\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.580992 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-os-release\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.590007 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.609752 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.614417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.614584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.614656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.614726 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.614802 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.621274 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.632067 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.647737 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.658969 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.670142 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.681879 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cnibin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682072 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cnibin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682081 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-kubelet\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682181 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-etc-kubernetes\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682225 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-multus\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682259 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-multus-certs\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682279 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-etc-kubernetes\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682358 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-multus\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682296 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cni-binary-copy\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682414 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-bin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682426 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-multus-certs\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682456 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-hosts-file\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682482 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-os-release\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682518 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-system-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682520 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-cni-bin\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682531 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-hosts-file\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " 
pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682539 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682595 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-hostroot\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682597 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-os-release\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682626 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-k8s-cni-cncf-io\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682648 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-conf-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682657 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-hostroot\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682670 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-daemon-config\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682676 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-system-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682702 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-cni-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682709 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-k8s-cni-cncf-io\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682714 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnjnw\" (UniqueName: \"kubernetes.io/projected/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-kube-api-access-vnjnw\") pod \"multus-4pk8x\" (UID: 
\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682757 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmt6g\" (UniqueName: \"kubernetes.io/projected/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-kube-api-access-dmt6g\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682693 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-conf-dir\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682841 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-netns\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682868 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-socket-dir-parent\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682906 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-run-netns\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.682936 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-socket-dir-parent\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.683044 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-cni-binary-copy\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.683412 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-host-var-lib-kubelet\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.683470 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-multus-daemon-config\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.687343 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.698297 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.713459 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnjnw\" (UniqueName: \"kubernetes.io/projected/e1ae3d8d-27cf-489f-a6ba-ef914db74bff-kube-api-access-vnjnw\") pod \"multus-4pk8x\" (UID: \"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\") " pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.716758 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmt6g\" (UniqueName: \"kubernetes.io/projected/d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea-kube-api-access-dmt6g\") pod \"node-resolver-pp77w\" (UID: \"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\") " pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.722091 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.722287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.722377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.722446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.722500 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.729818 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.743573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.757651 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.775983 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.791514 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.813611 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.825877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.825920 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.825930 4873 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.825947 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.825957 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.836632 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.859435 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.865185 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4pk8x" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.871256 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pp77w" Feb 19 09:45:17 crc kubenswrapper[4873]: W0219 09:45:17.879898 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1ae3d8d_27cf_489f_a6ba_ef914db74bff.slice/crio-5a187ab1c980a5e2fb5e0b037e5e6f00a42109df3c8de8358ae4d423f9a958e6 WatchSource:0}: Error finding container 5a187ab1c980a5e2fb5e0b037e5e6f00a42109df3c8de8358ae4d423f9a958e6: Status 404 returned error can't find the container with id 5a187ab1c980a5e2fb5e0b037e5e6f00a42109df3c8de8358ae4d423f9a958e6 Feb 19 09:45:17 crc kubenswrapper[4873]: W0219 09:45:17.885160 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8bbad50_17a6_49b3_aa6a_3d8bcf05f5ea.slice/crio-905da86833ed88c53cfd0279655c029c499ad0bee8a36d9732d486921b652bf5 WatchSource:0}: Error finding container 905da86833ed88c53cfd0279655c029c499ad0bee8a36d9732d486921b652bf5: Status 404 returned error can't find the container with id 905da86833ed88c53cfd0279655c029c499ad0bee8a36d9732d486921b652bf5 Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.886483 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.930611 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.930801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.930886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.930950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.931013 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:17Z","lastTransitionTime":"2026-02-19T09:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.976253 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n2lwn"] Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.976728 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qmsl7"] Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.976843 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.977179 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979004 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979346 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979460 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979552 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979717 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979909 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.979980 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 09:45:17 crc kubenswrapper[4873]: I0219 09:45:17.991028 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.003035 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.014212 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.025389 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.033135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.033171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.033180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.033194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.033203 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.038197 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.060059 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.075153 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.086937 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c61760e-2955-4688-b68b-1ceeda73f356-mcd-auth-proxy-config\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.086969 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.086985 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-system-cni-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087008 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087183 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmwg\" (UniqueName: \"kubernetes.io/projected/8c61760e-2955-4688-b68b-1ceeda73f356-kube-api-access-fgmwg\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087235 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6v4\" (UniqueName: \"kubernetes.io/projected/acb9409d-e5b1-4d32-9200-8dc32d8923d2-kube-api-access-gr6v4\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087285 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-os-release\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 
crc kubenswrapper[4873]: I0219 09:45:18.087309 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cnibin\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087366 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087416 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c61760e-2955-4688-b68b-1ceeda73f356-proxy-tls\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.087455 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c61760e-2955-4688-b68b-1ceeda73f356-rootfs\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.101119 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.115035 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 09:40:17 +0000 UTC, rotation deadline is 2026-11-13 18:55:06.537818787 +0000 UTC Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.115121 4873 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6417h9m48.422702213s for next certificate rotation Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.115048 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.125960 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.133683 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.135278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.135313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.135323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.135365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.135376 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.144084 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.153395 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.164570 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.175652 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.186374 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.188659 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c61760e-2955-4688-b68b-1ceeda73f356-rootfs\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.188747 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c61760e-2955-4688-b68b-1ceeda73f356-mcd-auth-proxy-config\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.188773 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8c61760e-2955-4688-b68b-1ceeda73f356-rootfs\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 
09:45:18.188831 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.188946 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-system-cni-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.188983 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189019 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-system-cni-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189044 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgmwg\" (UniqueName: \"kubernetes.io/projected/8c61760e-2955-4688-b68b-1ceeda73f356-kube-api-access-fgmwg\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189066 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6v4\" (UniqueName: \"kubernetes.io/projected/acb9409d-e5b1-4d32-9200-8dc32d8923d2-kube-api-access-gr6v4\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189120 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-os-release\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189144 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cnibin\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189170 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189184 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c61760e-2955-4688-b68b-1ceeda73f356-proxy-tls\") pod \"machine-config-daemon-qmsl7\" (UID: 
\"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189369 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cnibin\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189832 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.189396 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/acb9409d-e5b1-4d32-9200-8dc32d8923d2-os-release\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.190311 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.190458 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c61760e-2955-4688-b68b-1ceeda73f356-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.190410 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/acb9409d-e5b1-4d32-9200-8dc32d8923d2-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.193895 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c61760e-2955-4688-b68b-1ceeda73f356-proxy-tls\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.201576 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.207713 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmwg\" (UniqueName: \"kubernetes.io/projected/8c61760e-2955-4688-b68b-1ceeda73f356-kube-api-access-fgmwg\") pod \"machine-config-daemon-qmsl7\" (UID: \"8c61760e-2955-4688-b68b-1ceeda73f356\") " pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.210990 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6v4\" (UniqueName: \"kubernetes.io/projected/acb9409d-e5b1-4d32-9200-8dc32d8923d2-kube-api-access-gr6v4\") pod \"multus-additional-cni-plugins-n2lwn\" (UID: \"acb9409d-e5b1-4d32-9200-8dc32d8923d2\") " 
pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.216783 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.227474 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.237451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.237486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.237494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.237507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.237517 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.239302 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed9
7590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.247833 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.259125 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.272422 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.289687 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.294859 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:45:18 crc kubenswrapper[4873]: W0219 09:45:18.309403 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacb9409d_e5b1_4d32_9200_8dc32d8923d2.slice/crio-a59bc3455f270192605d5ae9a48a36c898f190bbee88506712627bced73d7bbb WatchSource:0}: Error finding container a59bc3455f270192605d5ae9a48a36c898f190bbee88506712627bced73d7bbb: Status 404 returned error can't find the container with id a59bc3455f270192605d5ae9a48a36c898f190bbee88506712627bced73d7bbb Feb 19 09:45:18 crc kubenswrapper[4873]: W0219 09:45:18.311794 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c61760e_2955_4688_b68b_1ceeda73f356.slice/crio-80c8293e37eae8d5fd3b9053b4e8e6a37966234aa3490061d14d879f58dacde0 WatchSource:0}: Error finding container 80c8293e37eae8d5fd3b9053b4e8e6a37966234aa3490061d14d879f58dacde0: Status 404 returned error can't find the container with id 80c8293e37eae8d5fd3b9053b4e8e6a37966234aa3490061d14d879f58dacde0 Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.340084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.340217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.340237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.340259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.340276 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.360834 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j94bh"] Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.362371 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.367424 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.367660 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.367791 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.367829 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.368402 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.368557 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.368754 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.381421 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390510 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390548 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390572 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket\") pod 
\"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390593 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390613 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390642 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390661 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390685 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390706 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390751 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390842 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390945 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.390996 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.391029 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.391049 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7vl\" (UniqueName: \"kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.391072 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.391140 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.391166 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.394561 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.409869 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.426780 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.432508 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:25:22.22645834 +0000 UTC Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.441961 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.444773 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.444811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.444822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.444865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.444879 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.457534 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.469221 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.482573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.483801 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:18 crc kubenswrapper[4873]: E0219 09:45:18.483913 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492069 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492169 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492199 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492247 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492292 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492330 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492380 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492405 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492427 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492452 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492484 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492529 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492555 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc 
kubenswrapper[4873]: I0219 09:45:18.492590 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492664 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7vl\" (UniqueName: \"kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492690 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492714 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492739 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492866 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492957 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.492998 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.493031 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494191 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494284 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494331 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494372 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494396 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494432 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494532 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.494999 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.495068 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.495134 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns\") pod \"ovnkube-node-j94bh\" (UID: 
\"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.496299 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.497229 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.497184 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.516116 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.521772 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.527983 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7vl\" (UniqueName: \"kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl\") pod \"ovnkube-node-j94bh\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.530503 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.544755 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.546930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.546971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.546983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.547000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.547012 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.558947 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.656865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.656929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.656945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.656968 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.656989 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.658183 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.658245 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.658257 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"80c8293e37eae8d5fd3b9053b4e8e6a37966234aa3490061d14d879f58dacde0"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.659328 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pp77w" event={"ID":"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea","Type":"ContainerStarted","Data":"28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.659376 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pp77w" event={"ID":"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea","Type":"ContainerStarted","Data":"905da86833ed88c53cfd0279655c029c499ad0bee8a36d9732d486921b652bf5"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.661557 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" 
event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerStarted","Data":"844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.661586 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerStarted","Data":"a59bc3455f270192605d5ae9a48a36c898f190bbee88506712627bced73d7bbb"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.662965 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerStarted","Data":"6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.663023 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerStarted","Data":"5a187ab1c980a5e2fb5e0b037e5e6f00a42109df3c8de8358ae4d423f9a958e6"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.673779 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.686536 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.697752 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.699942 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.715457 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: W0219 09:45:18.720799 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7760a15_9ea0_42f0_b42b_72de30071d14.slice/crio-542002b7bbc20e4e4f7ed68e13539e1b5d49a0679ef11d6b86cc15c762bc318b WatchSource:0}: Error finding container 542002b7bbc20e4e4f7ed68e13539e1b5d49a0679ef11d6b86cc15c762bc318b: Status 404 returned error can't find the container with id 542002b7bbc20e4e4f7ed68e13539e1b5d49a0679ef11d6b86cc15c762bc318b Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.733593 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.751077 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.762476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.762525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.762539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.762557 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.762567 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.767498 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.784838 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.804688 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.823865 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.840732 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.861940 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.864339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.864376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.864386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.864402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.864413 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.883820 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.894797 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T
09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.911541 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.950022 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.967398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.967461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.967479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.967503 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.967520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:18Z","lastTransitionTime":"2026-02-19T09:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:18 crc kubenswrapper[4873]: I0219 09:45:18.985639 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.024369 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.069319 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.069362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.069371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc 
kubenswrapper[4873]: I0219 09:45:19.069385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.069396 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.071733 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.099785 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.099958 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:27.099931042 +0000 UTC m=+36.389362680 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.105091 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.146413 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.171831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc 
kubenswrapper[4873]: I0219 09:45:19.172070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.172201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.172327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.172415 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.198180 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.201488 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.201730 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.201838 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.201935 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.201623 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.201839 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202201 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.201883 4873 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202242 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202256 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202300 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:27.202284913 +0000 UTC m=+36.491716551 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202024 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202330 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:27.202323814 +0000 UTC m=+36.491755452 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202225 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202357 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:27.202352645 +0000 UTC m=+36.491784283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.202599 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:27.20257682 +0000 UTC m=+36.492008468 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.233693 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.263894 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.276740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.276774 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.276783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.276801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.276810 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.307869 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed9
7590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.349869 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.379720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.379760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.379773 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.379796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.379809 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.432769 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:11:37.787616531 +0000 UTC Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.482036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.482366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.482379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.482394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.482403 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.483161 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.483260 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.483609 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:19 crc kubenswrapper[4873]: E0219 09:45:19.483674 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.503989 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kbv7k"] Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.504340 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.506004 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.506160 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.506050 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.506891 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.517765 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.530092 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.548289 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.585228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.585455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.585556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.585643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.585718 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.590360 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.605641 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33fdff17-cdda-468e-8520-7f0937acd8db-host\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.605687 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gql7d\" (UniqueName: \"kubernetes.io/projected/33fdff17-cdda-468e-8520-7f0937acd8db-kube-api-access-gql7d\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.605855 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33fdff17-cdda-468e-8520-7f0937acd8db-serviceca\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.626762 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.667874 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795" exitCode=0 Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.667981 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" 
event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.670665 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b" exitCode=0 Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.670762 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.670848 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"542002b7bbc20e4e4f7ed68e13539e1b5d49a0679ef11d6b86cc15c762bc318b"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.683225 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.689349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.689399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.689414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.689432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.689444 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.707062 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33fdff17-cdda-468e-8520-7f0937acd8db-host\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.707194 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gql7d\" (UniqueName: \"kubernetes.io/projected/33fdff17-cdda-468e-8520-7f0937acd8db-kube-api-access-gql7d\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.707481 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33fdff17-cdda-468e-8520-7f0937acd8db-serviceca\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.707626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33fdff17-cdda-468e-8520-7f0937acd8db-host\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.712278 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33fdff17-cdda-468e-8520-7f0937acd8db-serviceca\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.721224 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.741031 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gql7d\" (UniqueName: \"kubernetes.io/projected/33fdff17-cdda-468e-8520-7f0937acd8db-kube-api-access-gql7d\") pod \"node-ca-kbv7k\" (UID: \"33fdff17-cdda-468e-8520-7f0937acd8db\") " pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.766494 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797706 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797718 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.797936 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kbv7k" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.809241 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: W0219 09:45:19.814386 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fdff17_cdda_468e_8520_7f0937acd8db.slice/crio-4958538e56e82b562971372acc5abe128e2607afeb4e6289824a3e581a5b5f31 WatchSource:0}: Error finding container 4958538e56e82b562971372acc5abe128e2607afeb4e6289824a3e581a5b5f31: Status 404 returned error can't find the container with id 4958538e56e82b562971372acc5abe128e2607afeb4e6289824a3e581a5b5f31 Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.845564 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.887463 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.907136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.908373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.908392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.908436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.908447 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:19Z","lastTransitionTime":"2026-02-19T09:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.929066 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:19 crc kubenswrapper[4873]: I0219 09:45:19.968391 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.008998 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.012554 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.012584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.012613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.012630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.012639 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.045331 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.084208 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.114589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 
09:45:20.114632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.114643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.114659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.114670 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.128210 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.168122 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.206861 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0da
d8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.216578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.216617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.216626 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.216640 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.216651 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.245059 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.287163 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.318808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.318848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.318857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.318870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.318881 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.332673 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.365584 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.407583 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.422014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.422061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.422072 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.422088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.422121 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.434259 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:10:32.494306312 +0000 UTC Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.445806 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.484087 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:20 crc kubenswrapper[4873]: E0219 09:45:20.484240 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.485194 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.524026 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.524060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.524068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.524088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.524098 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.533876 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.567702 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.625945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.625984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.625995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc 
kubenswrapper[4873]: I0219 09:45:20.626010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.626020 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.676456 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kbv7k" event={"ID":"33fdff17-cdda-468e-8520-7f0937acd8db","Type":"ContainerStarted","Data":"e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.676525 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kbv7k" event={"ID":"33fdff17-cdda-468e-8520-7f0937acd8db","Type":"ContainerStarted","Data":"4958538e56e82b562971372acc5abe128e2607afeb4e6289824a3e581a5b5f31"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.678251 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505" exitCode=0 Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.678333 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690010 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690087 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690132 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690149 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.690164 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.701592 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.720206 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.729156 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.729196 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.729213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc 
kubenswrapper[4873]: I0219 09:45:20.729234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.729249 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.740168 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.769248 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.784597 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.808648 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.832386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.832431 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.832452 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.832477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.832494 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.848094 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.894683 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0da
d8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.926197 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.935721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.935771 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.935788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.935812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.935832 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:20Z","lastTransitionTime":"2026-02-19T09:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:20 crc kubenswrapper[4873]: I0219 09:45:20.964705 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.026265 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.039337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.039377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.039387 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.039401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.039411 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.066742 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.092029 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.129027 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.141365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.141424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.141442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc 
kubenswrapper[4873]: I0219 09:45:21.141466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.141483 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.166949 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.175310 4873 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175488 4873 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175493 4873 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175541 4873 reflector.go:484] 
object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175585 4873 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175612 4873 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175637 4873 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175654 4873 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175667 4873 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding 
Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175679 4873 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175542 4873 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175630 4873 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175554 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175554 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175583 4873 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode 
an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175583 4873 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175579 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175582 4873 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175760 4873 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175618 4873 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175770 4873 reflector.go:484] 
object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175800 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175624 4873 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175818 4873 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175830 4873 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175623 4873 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via 
ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175849 4873 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175725 4873 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175868 4873 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175772 4873 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175830 4873 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175895 4873 reflector.go:484] k8s.io/client-go/informers/factory.go:160: 
watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175834 4873 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175912 4873 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175939 4873 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175961 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.175971 4873 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 
crc kubenswrapper[4873]: W0219 09:45:21.175963 4873 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.176002 4873 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.176009 4873 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: W0219 09:45:21.176016 4873 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection force closed via ClientConn.Close") has prevented the request from succeeding Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.244352 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.244383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.244392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.244405 
4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.244414 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.346761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.346807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.346823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.346843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.346858 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.435171 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:08:25.03492548 +0000 UTC Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.449203 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.449252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.449265 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.449281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.449292 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.483743 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:21 crc kubenswrapper[4873]: E0219 09:45:21.483888 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.484020 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:21 crc kubenswrapper[4873]: E0219 09:45:21.484181 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.551860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.551893 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.551900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.551914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.551924 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.654163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.654240 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.654264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.654294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.654317 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.699151 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189" exitCode=0 Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.699217 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.757483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.757531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.757543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.757561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.757577 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.861293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.861362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.861383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.861409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.861428 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.964489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.965010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.965031 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.965057 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:21 crc kubenswrapper[4873]: I0219 09:45:21.965076 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:21Z","lastTransitionTime":"2026-02-19T09:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.030323 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.037834 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.038939 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.049847 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.065834 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.068008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.068046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.068058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.068074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.068085 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.089479 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.093994 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.171237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.171277 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.171286 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.171301 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.171310 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.191989 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.204479 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.214305 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.221024 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.233391 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.235908 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.244061 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.254592 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.256087 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.273477 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.275412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 
09:45:22.275446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.275460 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.275476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.275488 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.285831 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.293686 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.316580 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.326705 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.339346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.353965 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.370282 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc1
5eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.378483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.378548 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.378568 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.378592 4873 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.378611 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.395759 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
z7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.397148 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.402445 4873 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.410929 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.423921 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.428018 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.436404 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:29:46.167120903 +0000 UTC Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.437688 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.445252 4873 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.449678 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.461932 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 
09:45:22.475042 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.476286 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\
\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.481155 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.483229 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:22 crc kubenswrapper[4873]: E0219 09:45:22.483365 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.484654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.484733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.484760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.484812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.484833 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.500303 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.519475 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.522866 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.524439 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.539510 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.549621 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.564943 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.565161 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.576852 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.577209 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.589708 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.589748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.589776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.589791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.589800 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.597091 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.637009 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.657451 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.678264 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.692077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.692155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.692172 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.692196 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.692215 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.705243 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a" exitCode=0 Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.705302 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.712060 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.712587 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.717441 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.736891 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.756747 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 
09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.794413 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.794451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.794463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.794478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.794490 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.796484 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.817046 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.836716 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.856379 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.876382 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.896758 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.896790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.896804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.896820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.896832 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.903151 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.976915 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.990375 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:22Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.999509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.999549 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.999563 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:22 crc 
kubenswrapper[4873]: I0219 09:45:22.999583 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:22 crc kubenswrapper[4873]: I0219 09:45:22.999598 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:22Z","lastTransitionTime":"2026-02-19T09:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.030545 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.064587 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84
febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.102561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.102616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.102632 4873 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.102653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.102668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.105923 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.149670 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.187034 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.206475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.206547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.206567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.206592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.206612 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.229668 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.264450 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311147 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.311509 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.349461 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.384568 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.413731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.413769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.413783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc 
kubenswrapper[4873]: I0219 09:45:23.413799 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.413810 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.433287 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb
1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.436593 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:40:55.084003919 +0000 UTC Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.465429 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.485904 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:23 crc kubenswrapper[4873]: E0219 09:45:23.486025 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.486431 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:23 crc kubenswrapper[4873]: E0219 09:45:23.486494 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.508868 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.518340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.518387 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.518397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.518415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.518425 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.551663 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:
45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.596414 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.625139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.625187 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.625201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.625217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.625229 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.627482 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.672582 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.708538 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.719408 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6" exitCode=0 Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.719467 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.726933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.726962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.726973 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.727005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.727016 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.750182 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.792073 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.828958 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.829905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.829951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.829966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.829986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.830001 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.863740 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.904867 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.932452 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.932491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.932505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.932524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.932537 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:23Z","lastTransitionTime":"2026-02-19T09:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.957241 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:23 crc kubenswrapper[4873]: I0219 09:45:23.983824 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.030402 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.035381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.035404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.035413 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.035425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.035434 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.064937 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.105510 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.139991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.140046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.140062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.140087 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.140132 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.150870 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.191230 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.234771 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.243143 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.243201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.243224 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.243253 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.243275 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.270703 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.309167 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.351563 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.351640 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.351659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.351684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.352366 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.367137 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.386153 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.436511 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.437316 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:58:19.131212465 +0000 UTC Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.456993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.457033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.457043 4873 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.457056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.457066 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.472954 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.483315 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:24 crc kubenswrapper[4873]: E0219 09:45:24.483558 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.513460 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.552380 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.560327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.560391 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.560416 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.560444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.560466 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.589516 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.636781 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.663703 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.663755 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.663772 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc 
kubenswrapper[4873]: I0219 09:45:24.663820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.663836 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.669841 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.729624 4873 generic.go:334] "Generic (PLEG): container finished" podID="acb9409d-e5b1-4d32-9200-8dc32d8923d2" containerID="5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39" exitCode=0 Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.729704 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" 
event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerDied","Data":"5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.747031 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.760635 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.767278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.767342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.767365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc 
kubenswrapper[4873]: I0219 09:45:24.767396 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.767419 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.787529 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.827024 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.866795 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.869234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc 
kubenswrapper[4873]: I0219 09:45:24.869264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.869274 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.869288 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.869298 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.905376 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb
1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.944159 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.972016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.972063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.972080 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.972127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:24 crc kubenswrapper[4873]: 
I0219 09:45:24.972146 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:24Z","lastTransitionTime":"2026-02-19T09:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:24 crc kubenswrapper[4873]: I0219 09:45:24.984144 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:24Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.036245 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.071132 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.074601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.074671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.074702 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.074746 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.074786 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.104425 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.144454 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.177358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.177400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.177412 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.177430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.177442 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.189338 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.224443 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.279236 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.279275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.279287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.279302 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.279315 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.383182 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.383237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.383258 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.383283 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.383302 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.438013 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:40:11.372454571 +0000 UTC Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.484004 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.484040 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:25 crc kubenswrapper[4873]: E0219 09:45:25.484250 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:25 crc kubenswrapper[4873]: E0219 09:45:25.484433 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.491054 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.491148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.491177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.491226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.491298 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.594345 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.594397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.594411 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.594430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.594443 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.697680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.697728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.697743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.697763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.697776 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.735973 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" event={"ID":"acb9409d-e5b1-4d32-9200-8dc32d8923d2","Type":"ContainerStarted","Data":"566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.744781 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.745815 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.745917 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.752137 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.764830 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.770320 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.770425 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.779310 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.791420 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.800480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.800519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.800530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.800546 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.800556 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.804612 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.817392 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.828179 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.839565 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.852722 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.869773 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.880679 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.898141 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.903540 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.903598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.903616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.903638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:25 crc kubenswrapper[4873]: 
I0219 09:45:25.903655 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:25Z","lastTransitionTime":"2026-02-19T09:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.909289 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.923034 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.936091 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.949980 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.965001 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.982333 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:25 crc kubenswrapper[4873]: I0219 09:45:25.995378 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.006962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.007017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.007035 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.007058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.007076 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.012403 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.031668 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.078468 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.104937 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.109585 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.109638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.109652 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.109697 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.109710 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.156419 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.185089 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.212496 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.212547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.212606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.212633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.212650 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.230346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.269866 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.304746 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.315529 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.315595 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.315613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.315638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.315658 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.347639 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:26Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.419247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc 
kubenswrapper[4873]: I0219 09:45:26.419393 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.419419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.419449 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.419475 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.438564 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:34:17.44741874 +0000 UTC Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.484075 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:26 crc kubenswrapper[4873]: E0219 09:45:26.484567 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.522181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.522272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.522296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.522321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.522339 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.625476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.625533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.625551 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.625575 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.625592 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.728379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.728456 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.728474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.728506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.728529 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.831500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.831885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.832019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.832202 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.832339 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.935724 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.935791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.935816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.935848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:26 crc kubenswrapper[4873]: I0219 09:45:26.935870 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:26Z","lastTransitionTime":"2026-02-19T09:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.037959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.038011 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.038037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.038051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.038059 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.110222 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.110440 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 09:45:43.110413012 +0000 UTC m=+52.399844670 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.111983 4873 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.139991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.140025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.140034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.140047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.140055 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.163783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.163834 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.163844 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.163875 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.163885 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.176412 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:27Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.179313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.179334 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.179344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.179357 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.179366 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.194984 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:27Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.198494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.198547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.198560 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.198576 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.198590 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.210641 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:27Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.210857 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.210909 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.210943 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.210970 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211034 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211042 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211074 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211093 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211123 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211074 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211150 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211161 4873 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211082 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:43.211068951 +0000 UTC m=+52.500500589 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211209 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:43.211197094 +0000 UTC m=+52.500628732 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211226 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:43.211218914 +0000 UTC m=+52.500650552 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.211241 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:43.211233295 +0000 UTC m=+52.500664933 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.213977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.214074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.214177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.214263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.214343 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.226409 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:27Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.230181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.230327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.230434 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.230559 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.230652 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.242552 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:27Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.242877 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.244570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.244672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.244752 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.244842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.244914 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.347803 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.348038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.348135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.348230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.348337 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.355847 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.438801 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:57:57.499660517 +0000 UTC Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.452598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.452862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.452894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.452926 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.452962 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.483650 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.483767 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.483816 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:27 crc kubenswrapper[4873]: E0219 09:45:27.484076 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.556302 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.556355 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.556374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.556396 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.556411 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.659127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.659189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.659207 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.659232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.659249 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.761181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.761211 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.761219 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.761231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.761239 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.864113 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.864341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.864409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.864502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.864567 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.968341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.968643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.968818 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.969025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:27 crc kubenswrapper[4873]: I0219 09:45:27.969446 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:27Z","lastTransitionTime":"2026-02-19T09:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.072996 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.073063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.073084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.073137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.073160 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.175453 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.175497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.175507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.175522 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.175534 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.278150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.278213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.278231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.278259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.278278 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.380603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.380645 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.380661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.380681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.380697 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.440353 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:40:47.567049124 +0000 UTC Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.483842 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:28 crc kubenswrapper[4873]: E0219 09:45:28.483994 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.484420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.484451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.484468 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.484489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.484505 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.587668 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.587766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.587784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.587843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.587861 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.626408 4873 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.691733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.691833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.691872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.691940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.691981 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.757687 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/0.log" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.761862 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019" exitCode=1 Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.761927 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.763434 4873 scope.go:117] "RemoveContainer" containerID="2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.780741 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.795061 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.795451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.795484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.795492 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc 
kubenswrapper[4873]: I0219 09:45:28.795506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.795520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.810820 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b
e40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.828809 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.839889 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.855228 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.883371 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.897836 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.897883 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.897904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.897932 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.897957 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:28Z","lastTransitionTime":"2026-02-19T09:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.899718 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.927992 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.951897 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.970365 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.984641 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:28 crc kubenswrapper[4873]: I0219 09:45:28.997382 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:28Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.000724 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.000775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.000804 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.000826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.000841 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.011452 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.103907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.104331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.104371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.104485 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.104511 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.207818 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.207873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.207886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.207905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.207918 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.310377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.310420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.310437 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.310460 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.310478 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.413526 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.413587 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.413607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.413631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.413647 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.441495 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:08:59.738202622 +0000 UTC Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.483617 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.483684 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:29 crc kubenswrapper[4873]: E0219 09:45:29.484061 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:29 crc kubenswrapper[4873]: E0219 09:45:29.484291 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.484454 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.520144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.520217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.520239 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.520275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.520299 4873 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.622483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.622515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.622523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.622536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.622546 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.724885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.724916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.724925 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.724939 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.724949 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.766427 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.768144 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.768602 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.770539 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/0.log" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.773243 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.773808 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.785929 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.800326 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.811166 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.826890 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.827658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.827694 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.827707 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.827723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.827733 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.843480 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.855344 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.869980 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.887578 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.900000 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.913941 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.927648 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.930069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.930100 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.930137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:29 crc 
kubenswrapper[4873]: I0219 09:45:29.930159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.930172 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:29Z","lastTransitionTime":"2026-02-19T09:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.943485 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.961071 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:29 crc kubenswrapper[4873]: I0219 09:45:29.980795 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:29Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.011227 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.032749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.032791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.032801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.032816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.032825 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.037825 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.053195 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.066281 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.076344 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.087737 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.097859 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.109370 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.125922 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.135399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.135427 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.135452 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.135466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.135475 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.139559 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.148972 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.161718 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.192393 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 
6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.205866 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.237908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.237962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.237981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.238007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.238024 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.340538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.340588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.340601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.340620 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.340632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.385859 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb"] Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.386305 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.388092 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.389888 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.407945 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.426244 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.441685 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:08:57.28899949 +0000 UTC Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.443972 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.444014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.444026 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.444041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.444055 4873 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.446973 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.475885 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 
6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.484135 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:30 crc kubenswrapper[4873]: E0219 09:45:30.484309 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.491317 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.505627 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.518470 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.534041 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.546899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.546942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.546956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.546972 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.546982 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.547669 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/566305a3-ea47-4e60-b247-5b32fa8544e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.547824 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.547921 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvr5\" (UniqueName: \"kubernetes.io/projected/566305a3-ea47-4e60-b247-5b32fa8544e2-kube-api-access-gpvr5\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.548038 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.549009 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.560739 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.575172 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.589864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.606005 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.619594 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.635197 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.648537 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/566305a3-ea47-4e60-b247-5b32fa8544e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.648614 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.648660 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvr5\" (UniqueName: \"kubernetes.io/projected/566305a3-ea47-4e60-b247-5b32fa8544e2-kube-api-access-gpvr5\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.648715 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650458 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.650670 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.651483 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/566305a3-ea47-4e60-b247-5b32fa8544e2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.655637 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/566305a3-ea47-4e60-b247-5b32fa8544e2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.680518 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvr5\" (UniqueName: \"kubernetes.io/projected/566305a3-ea47-4e60-b247-5b32fa8544e2-kube-api-access-gpvr5\") pod \"ovnkube-control-plane-749d76644c-t7gjb\" (UID: \"566305a3-ea47-4e60-b247-5b32fa8544e2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.705522 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.753325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.753360 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.753372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.753388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.753399 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.775860 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/1.log" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.776703 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/0.log" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.778676 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3" exitCode=1 Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.778717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.778748 4873 scope.go:117] "RemoveContainer" containerID="2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.779376 4873 scope.go:117] "RemoveContainer" containerID="37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3" Feb 19 09:45:30 crc kubenswrapper[4873]: E0219 09:45:30.779500 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.783334 4873 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" event={"ID":"566305a3-ea47-4e60-b247-5b32fa8544e2","Type":"ContainerStarted","Data":"3e9f8c301ee914f3296e8a090b69b2915883c93b034aab6deaa4dde2271ff8b7"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.793374 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.807596 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.817780 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.827018 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.842046 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855594 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855791 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.855177 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.869767 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.882574 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.897336 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.910374 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.924210 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.942856 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 
6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/ne
tworks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.952620 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.958760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.958790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.958799 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.958816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.958824 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:30Z","lastTransitionTime":"2026-02-19T09:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.966921 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:30 crc kubenswrapper[4873]: I0219 09:45:30.977484 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:30Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.061547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.061593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.061610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.061630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.061643 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.164629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.164694 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.164712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.164736 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.164755 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.267720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.267756 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.267766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.267781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.267790 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.370019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.370095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.370148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.370173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.370199 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.442404 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:50:20.332249366 +0000 UTC Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.472437 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.472475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.472484 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.472498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.472507 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.483874 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.483971 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:31 crc kubenswrapper[4873]: E0219 09:45:31.484029 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:31 crc kubenswrapper[4873]: E0219 09:45:31.484518 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.505867 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.517690 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.534204 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.550433 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.566717 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.574909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.574979 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.575004 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.575059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.575086 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.587745 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:
45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.609983 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additiona
l-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8
c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/op
t/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.621971 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.638925 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.654029 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.665528 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.678483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.678737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.678890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.679046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.679232 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.694262 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 
6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/ne
tworks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.710536 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.732776 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.749377 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.781917 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.781954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.781965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc 
kubenswrapper[4873]: I0219 09:45:31.781980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.781990 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.786966 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" event={"ID":"566305a3-ea47-4e60-b247-5b32fa8544e2","Type":"ContainerStarted","Data":"5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.787035 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" event={"ID":"566305a3-ea47-4e60-b247-5b32fa8544e2","Type":"ContainerStarted","Data":"96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.788899 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/1.log" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.794765 4873 scope.go:117] "RemoveContainer" containerID="37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3" Feb 19 09:45:31 crc kubenswrapper[4873]: E0219 09:45:31.795068 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.804168 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.816608 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.831811 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.846840 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.864881 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.877830 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lcp8k"] Feb 19 09:45:31 crc 
kubenswrapper[4873]: I0219 09:45:31.878504 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:31 crc kubenswrapper[4873]: E0219 09:45:31.878617 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.882541 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819e
edb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a45
16ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.883618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.883691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.883715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.883744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.883767 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.897490 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.918123 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.936326 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.957931 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.963696 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvs9b\" (UniqueName: \"kubernetes.io/projected/98d35597-056d-48f0-b599-28b098dd45f3-kube-api-access-rvs9b\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.963878 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.978331 4873 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fb8dc98187d774ae64314b294ea5d477d9205175171422f2ecb5e08a948f019\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:27Z\\\",\\\"message\\\":\\\"ce (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736581 6188 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736599 6188 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736654 
6188 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736805 6188 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.736917 6188 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.737306 6188 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:45:27.737480 6188 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:27.737633 6188 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:45:27.738132 6188 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/ne
tworks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.986163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:31 crc 
kubenswrapper[4873]: I0219 09:45:31.986238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.986256 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.986281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.986298 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:31Z","lastTransitionTime":"2026-02-19T09:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:31 crc kubenswrapper[4873]: I0219 09:45:31.988996 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:31Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.002972 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.015649 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.027148 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.043051 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.061402 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.064835 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvs9b\" (UniqueName: \"kubernetes.io/projected/98d35597-056d-48f0-b599-28b098dd45f3-kube-api-access-rvs9b\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.064975 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:32 crc kubenswrapper[4873]: E0219 09:45:32.065173 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:32 crc kubenswrapper[4873]: E0219 09:45:32.065287 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:32.565255344 +0000 UTC m=+41.854687022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.080083 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc 
kubenswrapper[4873]: I0219 09:45:32.089612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.089651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.089661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.089678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.089690 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.092208 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvs9b\" (UniqueName: \"kubernetes.io/projected/98d35597-056d-48f0-b599-28b098dd45f3-kube-api-access-rvs9b\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.100573 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.119351 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.137390 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.155439 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.176480 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-con
troller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.191812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.191860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.191876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.191897 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.191912 4873 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.194431 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountP
ath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.213371 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 
09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.242951 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.260007 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.280262 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.294990 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.295043 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.295058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.295078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.295094 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.297482 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.311637 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.324261 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:32Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.398321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.398380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.398397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.398421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.398437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.443540 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:26:39.153434125 +0000 UTC Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.483310 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:32 crc kubenswrapper[4873]: E0219 09:45:32.483807 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.500631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.500696 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.500717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.500742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.500761 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.552922 4873 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.571715 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:32 crc kubenswrapper[4873]: E0219 09:45:32.571900 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:32 crc kubenswrapper[4873]: E0219 09:45:32.571986 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:33.571967303 +0000 UTC m=+42.861398951 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.603062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.603217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.603246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.603317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.603345 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.706341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.706404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.706424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.706447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.706464 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.809680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.809747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.809764 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.809789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.809813 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.913070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.913227 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.913248 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.913271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:32 crc kubenswrapper[4873]: I0219 09:45:32.913288 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:32Z","lastTransitionTime":"2026-02-19T09:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.016015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.016079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.016137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.016162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.016179 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.119280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.119365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.119390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.119423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.119447 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.222201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.222283 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.222309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.222342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.222367 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.325805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.325871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.325889 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.325913 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.325934 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.429021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.429094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.429139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.429165 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.429182 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.444523 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:22:48.181731419 +0000 UTC Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.483131 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:33 crc kubenswrapper[4873]: E0219 09:45:33.483280 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.483354 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.483445 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:33 crc kubenswrapper[4873]: E0219 09:45:33.483589 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:33 crc kubenswrapper[4873]: E0219 09:45:33.483774 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.532535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.532602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.532619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.532642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.532660 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.584727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:33 crc kubenswrapper[4873]: E0219 09:45:33.584959 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:33 crc kubenswrapper[4873]: E0219 09:45:33.585089 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:35.585059574 +0000 UTC m=+44.874491252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.635619 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.635682 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.635703 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.635728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.635745 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.739227 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.739313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.739337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.739367 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.739390 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.842513 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.842556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.842568 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.842586 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.842598 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.945317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.945389 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.945407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.945430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:33 crc kubenswrapper[4873]: I0219 09:45:33.945448 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:33Z","lastTransitionTime":"2026-02-19T09:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.048661 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.048727 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.048742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.048760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.048772 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.152095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.152162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.152173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.152194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.152209 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.255166 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.255294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.255335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.255369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.255390 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.358458 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.358538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.358577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.358609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.358632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.445615 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:59:10.577655612 +0000 UTC Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.462359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.462434 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.462454 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.462481 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.462498 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.483722 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:34 crc kubenswrapper[4873]: E0219 09:45:34.483886 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.565046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.565134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.565154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.565180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.565197 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.668647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.668732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.668749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.668776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.668793 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.770867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.770915 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.770925 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.770939 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.770950 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.872811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.872868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.872886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.872912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.872928 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.975286 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.975347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.975360 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.975377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:34 crc kubenswrapper[4873]: I0219 09:45:34.975390 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:34Z","lastTransitionTime":"2026-02-19T09:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.077551 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.077593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.077602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.077616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.077625 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.216451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.216508 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.216524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.216545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.216559 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.319903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.320020 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.320047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.320071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.320087 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.423400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.423475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.423502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.423530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.423553 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.446693 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:00:13.837380047 +0000 UTC Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.484164 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.484250 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.484269 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:35 crc kubenswrapper[4873]: E0219 09:45:35.484358 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:35 crc kubenswrapper[4873]: E0219 09:45:35.484539 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:35 crc kubenswrapper[4873]: E0219 09:45:35.484699 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.526952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.527046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.527082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.527151 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.527183 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.608314 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:35 crc kubenswrapper[4873]: E0219 09:45:35.608720 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:35 crc kubenswrapper[4873]: E0219 09:45:35.608847 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:39.608818073 +0000 UTC m=+48.898249751 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.630656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.630727 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.630743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.630766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.630784 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.733654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.733724 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.733748 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.733775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.733793 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.837331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.837440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.837459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.837487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.837510 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.940824 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.940909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.940931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.940962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:35 crc kubenswrapper[4873]: I0219 09:45:35.940978 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:35Z","lastTransitionTime":"2026-02-19T09:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.044183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.044264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.044284 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.044311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.044333 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.147717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.147790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.147808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.147846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.147865 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.251089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.251183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.251202 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.251226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.251247 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.354313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.354385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.354406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.354434 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.354452 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.447279 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:06:26.111353196 +0000 UTC Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.458078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.458198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.458220 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.458246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.458266 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.483754 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:36 crc kubenswrapper[4873]: E0219 09:45:36.483933 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.561842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.561908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.561928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.561953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.561971 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.665497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.665578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.665602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.665632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.665656 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.769721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.769780 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.769797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.769822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.769839 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.872668 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.872738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.872757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.872781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.872800 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.975551 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.975602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.975621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.975644 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:36 crc kubenswrapper[4873]: I0219 09:45:36.975662 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:36Z","lastTransitionTime":"2026-02-19T09:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.078349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.078407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.078425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.078448 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.078467 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.181501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.181573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.181606 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.181654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.181675 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.284875 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.284954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.284978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.285009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.285031 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.388081 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.388196 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.388215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.388241 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.388266 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.448658 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:37:00.755689598 +0000 UTC Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.483529 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.483571 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.483720 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.483759 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.483897 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.483991 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.490860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.490912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.490932 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.490957 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.490974 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.594633 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.594689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.594702 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.594720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.594733 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.620930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.620999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.621023 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.621050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.621068 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.641337 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:37Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.646463 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.646527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.646552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.646584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.646612 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.668813 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [duplicate status payload, identical to previous attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:37Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.672840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.672900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.672923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.672954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.672971 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.696713 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [duplicate status payload, identical to previous attempt] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:37Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.701811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.701855 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.701869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.701890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.701906 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.719640 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:37Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.724273 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.724338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.724353 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.724377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.724393 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.744156 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:37Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:37 crc kubenswrapper[4873]: E0219 09:45:37.744304 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.746491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.746529 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.746543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.746564 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.746580 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.851745 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.851801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.851815 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.851833 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.851845 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.955225 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.955278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.955290 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.955309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:37 crc kubenswrapper[4873]: I0219 09:45:37.955322 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:37Z","lastTransitionTime":"2026-02-19T09:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.057985 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.058070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.058141 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.058176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.058203 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.160869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.160923 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.160941 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.160965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.160982 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.263487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.263556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.263582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.263612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.263634 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.367003 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.367071 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.367089 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.367145 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.367164 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.449537 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:03:14.124466165 +0000 UTC Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.469832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.469928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.469952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.469982 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.470006 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.483394 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:38 crc kubenswrapper[4873]: E0219 09:45:38.483548 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.573408 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.573454 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.573467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.573483 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.573495 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.675749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.675817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.675836 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.675865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.675887 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.779050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.779094 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.779140 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.779162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.779179 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.882197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.882255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.882268 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.882287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.882299 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.985212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.985258 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.985273 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.985293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:38 crc kubenswrapper[4873]: I0219 09:45:38.985305 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:38Z","lastTransitionTime":"2026-02-19T09:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.087763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.087811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.087823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.087840 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.087853 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.190853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.190900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.190913 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.190936 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.190949 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.293376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.293438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.293456 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.293480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.293498 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.397686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.397742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.397760 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.397791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.397807 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.450743 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:04:27.206997695 +0000 UTC Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.483619 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.483660 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:39 crc kubenswrapper[4873]: E0219 09:45:39.483845 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.483909 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:39 crc kubenswrapper[4873]: E0219 09:45:39.483980 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:39 crc kubenswrapper[4873]: E0219 09:45:39.484230 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.500676 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.500732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.500753 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.500778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.500797 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.603472 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.603554 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.603572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.603596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.603615 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.657731 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:39 crc kubenswrapper[4873]: E0219 09:45:39.657936 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:39 crc kubenswrapper[4873]: E0219 09:45:39.658068 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:47.658038444 +0000 UTC m=+56.947470122 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.707076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.707184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.707204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.707228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.707246 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.811808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.811894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.811914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.811938 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.812016 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.915776 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.915825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.915837 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.915854 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:39 crc kubenswrapper[4873]: I0219 09:45:39.915867 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:39Z","lastTransitionTime":"2026-02-19T09:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.018666 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.018744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.018764 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.018791 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.018808 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.120820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.120886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.120905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.120930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.120947 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.223736 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.223808 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.223826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.223850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.223868 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.327087 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.327171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.327191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.327214 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.327232 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.430316 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.430460 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.430480 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.430504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.430524 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.451667 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:41:44.285565054 +0000 UTC Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.483444 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:40 crc kubenswrapper[4873]: E0219 09:45:40.483637 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.534403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.534470 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.534490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.534515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.534534 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.637561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.637640 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.637663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.637690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.637710 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.740975 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.741050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.741070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.741095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.741144 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.844405 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.844468 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.844487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.844511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.844529 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.947559 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.947635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.947657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.947684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:40 crc kubenswrapper[4873]: I0219 09:45:40.947705 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:40Z","lastTransitionTime":"2026-02-19T09:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.050369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.050420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.050433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.050451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.050463 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.153608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.153671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.153689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.153714 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.153733 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.256523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.256579 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.256634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.256659 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.256676 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.360825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.361259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.361503 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.361723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.361909 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.452659 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:47:29.881706536 +0000 UTC Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.466030 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.466097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.466148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.466182 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.466217 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.483417 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.483560 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:41 crc kubenswrapper[4873]: E0219 09:45:41.483593 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.483704 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:41 crc kubenswrapper[4873]: E0219 09:45:41.484087 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:41 crc kubenswrapper[4873]: E0219 09:45:41.484323 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.505139 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.530417 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.552239 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc 
kubenswrapper[4873]: I0219 09:45:41.569276 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.569331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.569347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.569368 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.569383 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.575944 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.596918 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.617659 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.640419 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.666004 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.672433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.672521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.672545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.672600 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.672623 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.680320 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.707449 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.732432 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.755958 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.776225 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.776320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.776338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.776428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.776468 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.791268 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.808517 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.833924 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.855390 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.879971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.880017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.880034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc 
kubenswrapper[4873]: I0219 09:45:41.880061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.880079 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.982942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.983036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.983060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.983092 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:41 crc kubenswrapper[4873]: I0219 09:45:41.983188 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:41Z","lastTransitionTime":"2026-02-19T09:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.086308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.086364 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.086382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.086409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.086431 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.189151 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.189234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.189257 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.189280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.189299 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.292631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.292675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.292687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.292704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.292716 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.395684 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.395747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.395763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.395784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.395799 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.453845 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:53:47.266382958 +0000 UTC Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.483226 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:42 crc kubenswrapper[4873]: E0219 09:45:42.483601 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.498911 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.499041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.499062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.499087 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.499132 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.601900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.601958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.601977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.602002 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.602020 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.705242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.705308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.705322 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.705338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.705348 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.808474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.808514 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.808525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.808543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.808556 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.910343 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.910387 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.910400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.910416 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:42 crc kubenswrapper[4873]: I0219 09:45:42.910428 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:42Z","lastTransitionTime":"2026-02-19T09:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.013842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.013927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.013952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.013981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.014003 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.117927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.117993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.118010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.118033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.118050 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.218060 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.218233 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218284 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:46:15.218246506 +0000 UTC m=+84.507678184 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.218339 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218363 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.218402 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218422 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:15.2184055 +0000 UTC m=+84.507837168 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.218481 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218649 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218711 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218735 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218754 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218719 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:15.218698207 +0000 UTC m=+84.508129885 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218813 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:15.21879855 +0000 UTC m=+84.508230218 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218899 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218916 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218931 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.218975 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:15.218962294 +0000 UTC m=+84.508393972 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.222588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.222638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.222651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.222670 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.222685 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.325046 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.325129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.325148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.325171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.325187 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.440296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.440358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.440376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.440399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.440416 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.455055 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:33:14.237046189 +0000 UTC Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.483550 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.483698 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.484052 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.484327 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.484239 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:43 crc kubenswrapper[4873]: E0219 09:45:43.484689 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.544031 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.544418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.544583 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.544722 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.544860 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.649366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.649440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.649467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.649516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.649562 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.753627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.753742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.753763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.753788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.753805 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.857291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.857396 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.857415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.857439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.857460 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.961045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.961152 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.961170 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.961190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:43 crc kubenswrapper[4873]: I0219 09:45:43.961204 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:43Z","lastTransitionTime":"2026-02-19T09:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.064510 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.064567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.064585 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.064649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.064669 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.167617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.167700 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.167728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.167761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.167786 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.270477 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.270515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.270525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.270538 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.270547 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.373451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.373494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.373506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.373523 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.373536 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.455992 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:20:26.276547555 +0000 UTC Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.476582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.476667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.476688 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.476712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.476778 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.484302 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:44 crc kubenswrapper[4873]: E0219 09:45:44.484499 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.580814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.580879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.580919 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.580956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.580978 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.683868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.683940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.683965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.684002 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.684052 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.786967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.787050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.787070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.787096 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.787152 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.890494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.890550 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.890572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.890646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.890668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.993811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.993876 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.993905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.993937 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:44 crc kubenswrapper[4873]: I0219 09:45:44.993961 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:44Z","lastTransitionTime":"2026-02-19T09:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.096572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.096636 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.096660 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.096707 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.096726 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.199370 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.199440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.199459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.199485 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.199507 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.302933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.303062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.303086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.303134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.303158 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.406573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.406644 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.406666 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.406690 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.406710 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.456383 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:06:08.874490594 +0000 UTC Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.483464 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.483541 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.483501 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:45 crc kubenswrapper[4873]: E0219 09:45:45.483742 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:45 crc kubenswrapper[4873]: E0219 09:45:45.483813 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:45 crc kubenswrapper[4873]: E0219 09:45:45.483887 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.486070 4873 scope.go:117] "RemoveContainer" containerID="37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.509980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.510039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.510056 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.510079 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.510097 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.613060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.613819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.613960 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.614090 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.614254 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.717162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.717212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.717228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.717251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.717265 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.820398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.820464 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.820482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.820504 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.820522 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.843382 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/1.log" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.847146 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.847530 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.872997 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.895426 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.919722 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.924743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.924789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.924807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.924831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.924849 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:45Z","lastTransitionTime":"2026-02-19T09:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.957345 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.975311 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:45 crc kubenswrapper[4873]: I0219 09:45:45.998612 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.018220 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.027474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.027534 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.027552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc 
kubenswrapper[4873]: I0219 09:45:46.027574 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.027587 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.034802 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.048080 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.060277 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.071520 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc 
kubenswrapper[4873]: I0219 09:45:46.084895 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.092937 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.103858 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.105341 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.121599 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.129035 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc 
kubenswrapper[4873]: I0219 09:45:46.129061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.129070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.129084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.129094 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.135881 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.148346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.161555 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.174502 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.190520 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.203465 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.218247 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-con
troller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.229835 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.231536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.231566 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.231577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.231593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.231604 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.243469 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.268185 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.279561 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.292496 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.306952 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.322431 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.333806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.333835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.333843 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.333858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.333869 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.334763 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.349457 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.361366 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.377548 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.391786 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc 
kubenswrapper[4873]: I0219 09:45:46.436769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.436806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.436817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.436832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.436841 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.456957 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:14:30.156654077 +0000 UTC Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.483730 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:46 crc kubenswrapper[4873]: E0219 09:45:46.483879 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.539713 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.539753 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.539780 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.539796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.539806 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.643178 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.643245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.643266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.643292 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.643311 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.746185 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.746255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.746281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.746312 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.746334 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.849541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.849618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.849639 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.849671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.849691 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.852086 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/2.log" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.853145 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/1.log" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.857320 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" exitCode=1 Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.857438 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.857508 4873 scope.go:117] "RemoveContainer" containerID="37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.859276 4873 scope.go:117] "RemoveContainer" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" Feb 19 09:45:46 crc kubenswrapper[4873]: E0219 09:45:46.863375 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.884312 4873 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.897252 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.912366 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.933570 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.946334 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc 
kubenswrapper[4873]: I0219 09:45:46.953717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.953784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.953818 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.953847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.953868 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:46Z","lastTransitionTime":"2026-02-19T09:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.959465 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.972062 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:46 crc kubenswrapper[4873]: I0219 09:45:46.990547 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:46Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.005654 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.022569 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.037978 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.052928 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.057142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc 
kubenswrapper[4873]: I0219 09:45:47.057213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.057238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.057271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.057296 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.084703 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37d6b1798b9f98114e3e6cea724b615c9aa0a796cdecbed6e2693d573e20c0e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"message\\\":\\\"311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:30.029768 6333 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:45:30.032587 6333 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 09:45:30.032616 6333 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 09:45:30.032670 6333 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 09:45:30.032733 6333 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 09:45:30.032745 6333 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:30.032824 6333 factory.go:656] Stopping watch factory\\\\nI0219 09:45:30.032844 6333 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:30.032881 6333 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:30.032897 6333 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:30.032908 6333 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:30.032931 6333 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 
09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a0280454
27b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.102944 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.123756 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.141478 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159185 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.159702 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.262125 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.262201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.262234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.262251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.262263 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.365147 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.365214 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.365234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.365261 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.365278 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.457876 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:08:11.578741529 +0000 UTC Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.468381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.468499 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.468519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.468543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.468562 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.483489 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.483563 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.483712 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.483771 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.483933 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.484084 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.571725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.571805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.571831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.571863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.571885 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.674161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.674191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.674201 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.674216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.674230 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.684682 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.684832 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.684892 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:03.68487704 +0000 UTC m=+72.974308688 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.778037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.778197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.778219 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.778250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.778281 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.862939 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/2.log" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.867357 4873 scope.go:117] "RemoveContainer" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" Feb 19 09:45:47 crc kubenswrapper[4873]: E0219 09:45:47.867500 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.881754 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.881782 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.881794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.881807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.881819 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.885381 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc 
kubenswrapper[4873]: I0219 09:45:47.902448 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.918528 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.941391 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.959082 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.980581 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.985378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.985432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.985444 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.985462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:47 crc kubenswrapper[4873]: I0219 09:45:47.985477 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:47Z","lastTransitionTime":"2026-02-19T09:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.001534 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.024387 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.041792 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 
09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.058168 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.075881 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.088494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.088577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.088596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.088631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.088654 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.098265 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.106059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.106159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.106188 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.106222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.106247 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.117864 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.128733 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133228 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133246 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133260 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.133878 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed9
7590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.151294 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.153543 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.156733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.156783 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.156800 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc 
kubenswrapper[4873]: I0219 09:45:48.156820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.156836 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.169068 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.178327 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.182787 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.182825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.182838 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.182858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.182872 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.189005 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2
c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.204332 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.207000 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.209052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.209132 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.209152 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.209176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.209195 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.244481 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370
996d8eea\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.244897 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.247350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.247414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.247438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.247467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.247488 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.264417 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.286807 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.299532 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc 
kubenswrapper[4873]: I0219 09:45:48.313552 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.327482 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.341152 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.349517 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc 
kubenswrapper[4873]: I0219 09:45:48.349558 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.349570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.349588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.349599 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.365059 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.380235 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.397346 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.411047 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.424418 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc1
5eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.444906 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.451967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.452005 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.452017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.452034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.452047 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.456749 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.458800 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:00:21.103018127 +0000 UTC Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.468632 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.483284 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.483277 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: E0219 09:45:48.483458 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.502276 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.513820 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.555475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.555799 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.555940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.556077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.556280 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.659609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.659673 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.659691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.659715 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.659734 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.784864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.784927 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.784950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.784978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.784999 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.887174 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.887217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.887229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.887245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.887257 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.990238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.990280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.990295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.990313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:48 crc kubenswrapper[4873]: I0219 09:45:48.990327 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:48Z","lastTransitionTime":"2026-02-19T09:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.094461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.094537 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.094564 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.094597 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.094731 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.197762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.197827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.197847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.197874 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.197894 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.301934 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.301991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.302006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.302025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.302039 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.405167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.405243 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.405263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.405288 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.405310 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.459386 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:04:58.807351495 +0000 UTC Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.484474 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.484586 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:49 crc kubenswrapper[4873]: E0219 09:45:49.484676 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:49 crc kubenswrapper[4873]: E0219 09:45:49.484844 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.484483 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:49 crc kubenswrapper[4873]: E0219 09:45:49.485026 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.507858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.507905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.507922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.507944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.507962 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.611300 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.611365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.611388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.611418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.611441 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.714662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.714699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.714712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.714728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.714738 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.817556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.817629 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.817656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.817734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.817761 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.921244 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.921313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.921331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.921357 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:49 crc kubenswrapper[4873]: I0219 09:45:49.921375 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:49Z","lastTransitionTime":"2026-02-19T09:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.024270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.024340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.024357 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.024382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.024398 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.127034 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.127074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.127086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.127127 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.127150 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.229784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.229846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.229865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.229891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.229951 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.333478 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.333539 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.333557 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.333578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.333592 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.435802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.435834 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.435846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.435860 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.435872 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.459602 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:02:34.816472488 +0000 UTC Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.484041 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:50 crc kubenswrapper[4873]: E0219 09:45:50.484280 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.539311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.539366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.539387 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.539411 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.539429 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.642630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.642712 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.642734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.642762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.642782 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.745341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.745397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.745414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.745437 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.745453 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.848277 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.848323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.848339 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.848361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.848378 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.951781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.951847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.951864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.951886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:50 crc kubenswrapper[4873]: I0219 09:45:50.951903 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:50Z","lastTransitionTime":"2026-02-19T09:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.054956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.055010 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.055029 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.055051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.055067 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.157541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.157603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.157623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.157652 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.157676 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.260133 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.260173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.260213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.260232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.260272 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.368647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.369221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.369417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.369630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.369797 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.460685 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:20:18.086299302 +0000 UTC Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.473347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.473398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.473417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.473439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.473455 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.483192 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.483410 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.483377 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:51 crc kubenswrapper[4873]: E0219 09:45:51.483822 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:51 crc kubenswrapper[4873]: E0219 09:45:51.484043 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:51 crc kubenswrapper[4873]: E0219 09:45:51.484217 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.500411 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc 
kubenswrapper[4873]: I0219 09:45:51.514127 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.530264 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.546208 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.560894 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.575857 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.575933 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.575958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.575991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.576012 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.583211 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0
e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.607623 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.628234 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.657700 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.674173 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.679810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.680063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.680261 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.680414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.680560 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.695512 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.712489 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.732077 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc1
5eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.750061 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.765173 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.782931 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.783451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.783590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.783711 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.783806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.783901 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.803153 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-ope
rator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.886096 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.886175 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.886194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.886220 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.886253 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.988590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.988641 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.988658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.988682 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:51 crc kubenswrapper[4873]: I0219 09:45:51.988699 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:51Z","lastTransitionTime":"2026-02-19T09:45:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.091965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.092012 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.092025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.092042 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.092054 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.194907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.195407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.195571 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.195722 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.195888 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.299786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.299832 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.299849 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.299873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.299890 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.403069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.403502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.403662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.403819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.403985 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.460874 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:30:26.455476064 +0000 UTC Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.483190 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:52 crc kubenswrapper[4873]: E0219 09:45:52.483382 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.507530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.507572 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.507591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.507612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.507629 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.611184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.611253 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.611271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.611293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.611312 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.714183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.714237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.714262 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.714288 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.714307 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.817451 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.817516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.817536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.817561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.817579 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.920057 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.920498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.920709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.920891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:52 crc kubenswrapper[4873]: I0219 09:45:52.921049 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:52Z","lastTransitionTime":"2026-02-19T09:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.024263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.024328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.024346 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.024369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.024397 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.127566 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.127614 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.127625 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.127647 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.127660 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.230794 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.230850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.230862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.230879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.230890 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.335937 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.335992 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.336008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.336045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.336060 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.439077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.439170 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.439189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.439212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.439229 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.463258 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:49:40.297600798 +0000 UTC Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.484389 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.484472 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.484520 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:53 crc kubenswrapper[4873]: E0219 09:45:53.484633 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:53 crc kubenswrapper[4873]: E0219 09:45:53.484757 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:53 crc kubenswrapper[4873]: E0219 09:45:53.484906 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.542489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.542546 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.542565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.542589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.542606 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.645810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.645885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.645909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.645940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.645962 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.748602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.748667 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.748685 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.748708 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.748726 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.851390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.851447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.851465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.851488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.851506 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.955060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.955148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.955167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.955191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:53 crc kubenswrapper[4873]: I0219 09:45:53.955208 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:53Z","lastTransitionTime":"2026-02-19T09:45:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.057907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.057947 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.057959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.057976 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.057987 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.160181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.160223 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.160237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.160254 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.160264 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.262133 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.262159 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.262167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.262178 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.262187 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.363512 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.363535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.363542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.363552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.363559 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.464432 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:05:16.706537834 +0000 UTC Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.466349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.466403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.466424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.466450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.466471 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.484186 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:54 crc kubenswrapper[4873]: E0219 09:45:54.484434 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.569475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.569542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.569565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.569592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.569613 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.673501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.673646 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.673668 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.673693 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.673713 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.776987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.777050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.777070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.777129 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.777147 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.879385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.879430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.879441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.879459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.879470 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.981641 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.981702 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.981720 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.981744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:54 crc kubenswrapper[4873]: I0219 09:45:54.981765 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:54Z","lastTransitionTime":"2026-02-19T09:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.084605 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.084653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.084669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.084689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.084705 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.187050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.187197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.187221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.187243 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.187260 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.289369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.289414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.289423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.289436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.289444 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.391958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.392033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.392059 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.392088 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.392152 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.493457 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:29:20.900766694 +0000 UTC Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.494595 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.494628 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:55 crc kubenswrapper[4873]: E0219 09:45:55.494796 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.494814 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:55 crc kubenswrapper[4873]: E0219 09:45:55.494872 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:55 crc kubenswrapper[4873]: E0219 09:45:55.494939 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.497014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.497069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.497090 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.497149 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.497173 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.600181 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.600259 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.600284 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.600317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.600342 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.704032 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.704098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.704131 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.704153 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.704176 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.806916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.806954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.806963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.806976 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.806985 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.909099 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.909191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.909214 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.909242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:55 crc kubenswrapper[4873]: I0219 09:45:55.909264 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:55Z","lastTransitionTime":"2026-02-19T09:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.010973 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.011000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.011008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.011019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.011028 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.114340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.114374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.114386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.114402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.114416 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.217979 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.218063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.218084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.218154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.218193 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.320928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.321009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.321033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.321063 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.321084 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.424514 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.424574 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.424591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.424616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.424634 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.483573 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:56 crc kubenswrapper[4873]: E0219 09:45:56.483803 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.494549 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 21:54:46.192322339 +0000 UTC Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.526569 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.526601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.526610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.526623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.526632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.629337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.629392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.629404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.629419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.629430 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.731962 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.732027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.732044 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.732070 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.732086 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.834626 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.834694 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.834710 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.834731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.834747 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.937848 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.937907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.937928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.937954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:56 crc kubenswrapper[4873]: I0219 09:45:56.937970 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:56Z","lastTransitionTime":"2026-02-19T09:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.041202 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.041344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.041368 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.041392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.041409 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.144502 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.144552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.144567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.144589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.144603 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.246986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.247045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.247062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.247084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.247131 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.355632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.355704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.355723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.355746 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.355766 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.458998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.459087 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.459134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.459163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.459182 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.483633 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.483665 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:57 crc kubenswrapper[4873]: E0219 09:45:57.483828 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.483890 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:57 crc kubenswrapper[4873]: E0219 09:45:57.484082 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:57 crc kubenswrapper[4873]: E0219 09:45:57.484354 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.494911 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:40:49.426082993 +0000 UTC Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.562829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.562903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.562924 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.562947 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.563003 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.665926 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.665964 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.665972 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.665986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.666010 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.768186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.768220 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.768231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.768268 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.768281 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.870846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.870907 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.870925 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.870950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.870970 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.973125 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.973175 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.973192 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.973208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:57 crc kubenswrapper[4873]: I0219 09:45:57.973218 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:57Z","lastTransitionTime":"2026-02-19T09:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.075441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.075487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.075498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.075517 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.075529 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.177885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.177925 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.177934 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.177948 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.177959 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.280747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.280786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.280795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.280809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.280819 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.382988 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.383036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.383047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.383065 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.383097 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.484070 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:58 crc kubenswrapper[4873]: E0219 09:45:58.484232 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.485473 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.485509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.485521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.485536 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.485549 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.495800 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:53:20.913026942 +0000 UTC Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.539369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.539406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.539417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.539430 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.539440 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: E0219 09:45:58.551883 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:58Z is after 2025-08-24T17:21:41Z"
Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.556433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.556476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.556488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.556505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.556517 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.617662 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.617718 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.617739 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.617769 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.617795 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: E0219 09:45:58.636558 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:45:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:45:58 crc kubenswrapper[4873]: E0219 09:45:58.636765 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.638987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.639069 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.639132 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.639155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.639172 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.741969 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.742015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.742036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.742060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.742077 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.844383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.844408 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.844417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.844429 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.844437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.946174 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.946205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.946217 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.946232 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:58 crc kubenswrapper[4873]: I0219 09:45:58.946243 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:58Z","lastTransitionTime":"2026-02-19T09:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.048049 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.048086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.048132 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.048148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.048156 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.150255 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.150296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.150308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.150322 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.150333 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.251937 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.251971 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.251980 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.251994 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.252008 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.354881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.354984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.355003 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.355060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.355078 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.459008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.459045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.459055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.459068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.459078 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.483644 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.483792 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:45:59 crc kubenswrapper[4873]: E0219 09:45:59.484048 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.484075 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:45:59 crc kubenswrapper[4873]: E0219 09:45:59.484178 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:45:59 crc kubenswrapper[4873]: E0219 09:45:59.484364 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.495978 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:08:09.048603072 +0000 UTC Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.561870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.561896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.561905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.561917 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.561924 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.664346 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.664383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.664394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.664410 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.664422 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.767678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.767733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.767744 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.767757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.767789 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.869958 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.870009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.870022 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.870037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.870046 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.975623 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.975670 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.975680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.975695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:45:59 crc kubenswrapper[4873]: I0219 09:45:59.975710 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:45:59Z","lastTransitionTime":"2026-02-19T09:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.077537 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.077589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.077602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.077621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.077634 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.179084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.179150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.179160 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.179173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.179182 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.281467 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.281505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.281514 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.281525 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.281534 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.383456 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.383489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.383500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.383516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.383527 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.483454 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:00 crc kubenswrapper[4873]: E0219 09:46:00.483595 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.485439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.485501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.485519 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.485542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.485559 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.496914 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:29:23.885835248 +0000 UTC Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.588849 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.588900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.588914 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.588929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.588942 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.691863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.691910 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.691922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.691939 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.691950 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.793892 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.793956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.793966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.793981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.793991 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.896267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.896313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.896327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.896374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.896386 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.999250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.999282 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.999291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.999311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:00 crc kubenswrapper[4873]: I0219 09:46:00.999319 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:00Z","lastTransitionTime":"2026-02-19T09:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.101953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.102068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.102086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.102142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.102180 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.205320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.205360 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.205371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.205388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.205400 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.307968 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.308008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.308021 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.308038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.308049 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.410887 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.410945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.410965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.410987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.411004 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.483469 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:01 crc kubenswrapper[4873]: E0219 09:46:01.483734 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.484541 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.484628 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:01 crc kubenswrapper[4873]: E0219 09:46:01.484685 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:01 crc kubenswrapper[4873]: E0219 09:46:01.484798 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.497093 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:06:32.447348871 +0000 UTC Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.497552 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\
"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.513000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.513029 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.513038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.513052 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.513061 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.523392 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.532704 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.544006 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
9:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.560321 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.577407 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.596188 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.607810 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.615359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.615392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.615406 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.615424 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.615437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.618870 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2
c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.630921 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.641656 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc 
kubenswrapper[4873]: I0219 09:46:01.657267 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.671122 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.688764 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa01
3a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:
45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.706515 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.718366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.718416 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.718433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.718456 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.718474 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.721843 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0
e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.735670 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.820870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 
09:46:01.821198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.821330 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.821452 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.821595 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.923862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.923904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.923921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.923942 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:01 crc kubenswrapper[4873]: I0219 09:46:01.923959 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:01Z","lastTransitionTime":"2026-02-19T09:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.027598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.027656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.027680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.027709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.027728 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.130663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.130723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.130737 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.130754 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.130767 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.233390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.233639 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.233702 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.233768 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.233823 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.336741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.336947 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.337038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.337122 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.337193 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.439602 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.439643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.439654 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.439669 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.439680 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.483261 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:02 crc kubenswrapper[4873]: E0219 09:46:02.483638 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.497732 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:36:27.556930897 +0000 UTC Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.542816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.542873 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.542891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.542919 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.542938 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.645896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.645939 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.645950 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.645966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.645976 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.748365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.748403 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.748413 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.748428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.748437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.850534 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.850590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.850607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.850631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.850649 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.952721 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.952789 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.952812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.952835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:02 crc kubenswrapper[4873]: I0219 09:46:02.952852 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:02Z","lastTransitionTime":"2026-02-19T09:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.055095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.055154 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.055163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.055179 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.055189 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.158474 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.158527 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.158541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.158559 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.158572 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.260561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.260617 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.260635 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.260657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.260674 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.362861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.362922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.362945 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.362973 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.362994 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.465278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.465350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.465368 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.465394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.465410 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.483967 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.483970 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.483993 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.484222 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.484353 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.484427 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.485063 4873 scope.go:117] "RemoveContainer" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.485310 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.499258 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:55:57.973429486 +0000 UTC Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.568011 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.568068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.568084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.568134 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.568152 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.671757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.672062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.672249 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.672399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.672541 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.774810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.774842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.774851 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.774862 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.774872 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.781885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.782220 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:46:03 crc kubenswrapper[4873]: E0219 09:46:03.782385 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:46:35.782347014 +0000 UTC m=+105.071778862 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.877401 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.877470 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.877489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.877515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.877533 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.980695 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.980765 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.980778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.980795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:03 crc kubenswrapper[4873]: I0219 09:46:03.980806 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:03Z","lastTransitionTime":"2026-02-19T09:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.084802 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.084884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.084905 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.084931 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.084950 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.188612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.188672 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.188689 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.188713 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.188735 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.291189 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.291237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.291250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.291267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.291279 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.393251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.393298 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.393311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.393347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.393360 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.483481 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:04 crc kubenswrapper[4873]: E0219 09:46:04.483754 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.495549 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.495609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.495627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.495651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.495668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.499853 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:28:14.126187582 +0000 UTC Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.598509 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.598827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.598922 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.599014 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.599114 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.702267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.702335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.702359 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.702389 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.702414 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.805462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.805725 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.805795 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.805888 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.805978 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.908782 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.908837 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.908850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.908869 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.908883 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:04Z","lastTransitionTime":"2026-02-19T09:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.927528 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/0.log" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.927765 4873 generic.go:334] "Generic (PLEG): container finished" podID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff" containerID="6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003" exitCode=1 Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.927821 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerDied","Data":"6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003"} Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.928396 4873 scope.go:117] "RemoveContainer" containerID="6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.947254 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.960162 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.979989 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592
389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:04 crc kubenswrapper[4873]: I0219 09:46:04.993546 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.007643 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.017282 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.017329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.017342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.017360 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.017372 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.022811 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.038724 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc 
kubenswrapper[4873]: I0219 09:46:05.064241 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.078180 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.097565 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
9:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.111307 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.119932 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.119961 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.119969 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.119982 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.119992 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.127258 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.137442 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.146553 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.158165 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.167907 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.176813 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc 
kubenswrapper[4873]: I0219 09:46:05.221865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.221898 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.221909 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.221926 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.221937 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.324327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.324365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.324377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.324393 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.324405 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.426884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.426948 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.426967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.426991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.427008 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.483540 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.483604 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:05 crc kubenswrapper[4873]: E0219 09:46:05.483713 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.483772 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:05 crc kubenswrapper[4873]: E0219 09:46:05.483940 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:05 crc kubenswrapper[4873]: E0219 09:46:05.484016 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.499999 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:53:13.366310048 +0000 UTC Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.529703 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.529766 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.529790 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.529822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.529844 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.632930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.632981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.632997 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.633019 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.633037 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.735618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.735678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.735696 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.735719 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.735735 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.838823 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.838908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.838935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.838968 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.838992 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.934152 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/0.log" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.934215 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerStarted","Data":"81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.940438 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.940469 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.940494 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.940511 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.940520 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:05Z","lastTransitionTime":"2026-02-19T09:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.954008 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0
e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.969068 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:05 crc kubenswrapper[4873]: I0219 09:46:05.981802 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.002837 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566
b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.015257 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d4
57f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.031128 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201
d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.043465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.043531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.043553 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.043582 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.043607 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.047044 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.061188 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.087055 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.100945 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.117717 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.135191 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.146068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.146130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.146147 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.146167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.146185 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.152983 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.169319 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.184880 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.199081 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.213280 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:06 crc 
kubenswrapper[4873]: I0219 09:46:06.248121 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.248164 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.248176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.248195 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.248207 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.350951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.351001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.351013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.351029 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.351041 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.453718 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.453770 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.453788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.453813 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.453832 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.483157 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:06 crc kubenswrapper[4873]: E0219 09:46:06.483333 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.500262 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:52:37.170349495 +0000 UTC Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.556609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.556645 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.556656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.556671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.556681 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.658994 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.659067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.659086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.659138 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.659156 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.762388 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.762439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.762459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.762482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.762498 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.865229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.865281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.865291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.865305 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.865316 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.968039 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.968231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.968257 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.968282 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:06 crc kubenswrapper[4873]: I0219 09:46:06.968301 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:06Z","lastTransitionTime":"2026-02-19T09:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.070710 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.070822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.070849 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.070879 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.070898 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.173960 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.174040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.174066 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.174095 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.174142 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.277500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.277556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.277573 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.277596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.277613 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.380821 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.380859 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.380870 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.380885 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.380895 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.483718 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.483794 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:07 crc kubenswrapper[4873]: E0219 09:46:07.483914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:07 crc kubenswrapper[4873]: E0219 09:46:07.484127 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484223 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484541 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.484692 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: E0219 09:46:07.484314 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.500719 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:11:26.089371297 +0000 UTC Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.587364 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.587441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.587462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.587491 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.587512 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.689881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.689944 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.689961 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.689989 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.690005 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.792799 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.792946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.792974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.793003 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.793028 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.896613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.896675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.896701 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.896732 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:07 crc kubenswrapper[4873]: I0219 09:46:07.896814 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:07Z","lastTransitionTime":"2026-02-19T09:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.000225 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.000323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.000349 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.000380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.000406 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.104266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.104335 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.104357 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.104383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.104402 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.208023 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.208090 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.208174 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.208206 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.208231 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.313037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.313142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.313163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.313192 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.313214 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.416446 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.416500 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.416518 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.416543 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.416561 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.484014 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.484244 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.501303 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:23:02.926232579 +0000 UTC Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.501407 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.520030 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.520150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.520192 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.520223 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.520249 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.623637 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.623704 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.623733 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.623762 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.623784 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.727317 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.727379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.727396 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.727421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.727440 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.830047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.830155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.830176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.830197 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.830212 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.831402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.831441 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.831455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.831471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.831483 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.845615 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.850183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.850234 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.850247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.850264 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.850277 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.865085 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.869371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.869410 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.869421 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.869437 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.869450 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.887912 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.892136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.892166 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.892177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.892190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.892201 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.908520 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.912131 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.912158 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.912166 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.912177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.912186 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.925055 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:08 crc kubenswrapper[4873]: E0219 09:46:08.925181 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.933003 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.933051 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.933144 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.933173 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:08 crc kubenswrapper[4873]: I0219 09:46:08.933194 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:08Z","lastTransitionTime":"2026-02-19T09:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.037289 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.037358 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.037377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.037402 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.037420 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.140535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.140615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.140651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.140680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.140703 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.244309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.244394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.244418 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.244447 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.244469 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.348278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.348380 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.348399 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.348420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.348437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.451807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.451856 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.451872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.451894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.451910 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.483667 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:09 crc kubenswrapper[4873]: E0219 09:46:09.483807 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.483667 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.483861 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:09 crc kubenswrapper[4873]: E0219 09:46:09.483914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:09 crc kubenswrapper[4873]: E0219 09:46:09.484055 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.501413 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:52:06.309609187 +0000 UTC Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.554506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.554565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.554584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.554607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.554625 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.657834 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.658188 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.658361 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.658521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.658668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.762521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.762949 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.763205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.763423 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.763603 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.866896 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.866999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.867018 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.867076 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.867099 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.970481 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.970545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.970569 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.970598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:09 crc kubenswrapper[4873]: I0219 09:46:09.970622 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:09Z","lastTransitionTime":"2026-02-19T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.074135 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.074208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.074231 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.074262 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.074284 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.177466 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.177549 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.177567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.177625 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.177644 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.280680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.280781 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.280800 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.280825 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.280843 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.384162 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.384292 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.384384 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.384524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.384600 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.484071 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:10 crc kubenswrapper[4873]: E0219 09:46:10.484318 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.487650 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.487696 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.487709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.487731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.487746 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.501835 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:10:00.93981824 +0000 UTC Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.590785 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.590842 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.590872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.590895 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.590916 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.693601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.693655 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.693671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.693700 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.693717 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.797147 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.797183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.797194 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.797209 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.797220 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.900531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.900598 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.900616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.900641 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:10 crc kubenswrapper[4873]: I0219 09:46:10.900657 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:10Z","lastTransitionTime":"2026-02-19T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.004221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.004277 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.004307 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.004333 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.004351 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.108045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.108271 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.108304 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.108328 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.108346 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.211013 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.211074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.211093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.211185 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.211205 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.314093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.314180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.314200 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.314226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.314244 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.416753 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.416829 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.416846 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.416872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.416889 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.483848 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.483947 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.483868 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:11 crc kubenswrapper[4873]: E0219 09:46:11.484045 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:11 crc kubenswrapper[4873]: E0219 09:46:11.484228 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:11 crc kubenswrapper[4873]: E0219 09:46:11.484374 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.502303 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:59:35.786869788 +0000 UTC Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.505180 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.520886 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.520934 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.520955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.520986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.521010 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.523869 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc 
kubenswrapper[4873]: I0219 09:46:11.543610 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.564021 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.588339 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566
b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.607447 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d4
57f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.623796 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.623853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.623870 4873 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.623894 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.623911 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.628675 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.649540 4873 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.669771 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.701783 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.719427 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.727191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.727245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.727263 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.727287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.727304 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.743850 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.763781 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.784952 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.804017 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.820195 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.830398 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.830488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.830506 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.830530 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.830547 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.836524 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7c8560-6ad7-4360-9a1d-3137133ef615\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.855735 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.933814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.933903 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.933998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.934075 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:11 crc kubenswrapper[4873]: I0219 09:46:11.934154 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:11Z","lastTransitionTime":"2026-02-19T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.036577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.036904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.036916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.036930 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.036961 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.139747 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.139881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.139912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.139939 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.139960 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.244522 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.244591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.244616 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.244645 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.244667 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.348086 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.348198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.348222 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.348251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.348274 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.451137 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.451198 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.451215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.451237 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.451254 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.483278 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:12 crc kubenswrapper[4873]: E0219 09:46:12.483446 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.503415 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:39:04.683738931 +0000 UTC Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.554916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.554977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.554995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.555020 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.555040 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.658092 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.658208 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.658229 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.658257 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.658274 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.761294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.761351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.761369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.761394 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.761413 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.864060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.864190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.864212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.864236 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.864254 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.966663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.966729 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.966749 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.966773 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:12 crc kubenswrapper[4873]: I0219 09:46:12.966795 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:12Z","lastTransitionTime":"2026-02-19T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.072267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.072331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.072350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.072375 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.072392 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.175516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.175607 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.175627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.175651 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.175668 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.278238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.278272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.278281 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.278295 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.278304 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.381786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.381859 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.381877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.381901 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.381918 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.483325 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.483330 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:13 crc kubenswrapper[4873]: E0219 09:46:13.483508 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.483354 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:13 crc kubenswrapper[4873]: E0219 09:46:13.483639 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:13 crc kubenswrapper[4873]: E0219 09:46:13.483870 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.484812 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.484867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.484884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.484908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.484927 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.504024 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:08:52.231053931 +0000 UTC Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.588701 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.588767 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.588784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.588807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.588824 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.693585 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.693643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.693666 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.693693 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.693714 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.796631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.796716 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.796740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.796772 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.796795 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.900594 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.900673 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.900710 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.900740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:13 crc kubenswrapper[4873]: I0219 09:46:13.900761 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:13Z","lastTransitionTime":"2026-02-19T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.003296 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.003338 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.003351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.003366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.003379 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.105554 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.105605 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.105615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.105628 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.105660 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.208820 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.208868 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.208884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.208906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.208925 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.312498 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.312567 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.312590 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.312621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.312643 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.415432 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.415552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.415581 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.415609 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.415632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.483387 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:14 crc kubenswrapper[4873]: E0219 09:46:14.483869 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.504704 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:02:42.24266388 +0000 UTC Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.509870 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.523742 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.523999 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.524027 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.524060 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.524083 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.626716 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.626768 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.626788 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.626981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.626998 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.730417 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.730471 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.730489 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.730514 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.730535 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.833831 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.833899 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.833926 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.833956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.833976 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.937160 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.937250 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.937273 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.937371 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:14 crc kubenswrapper[4873]: I0219 09:46:14.937394 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:14Z","lastTransitionTime":"2026-02-19T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.040630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.040681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.040697 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.040719 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.040735 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.144037 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.144163 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.144190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.144221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.144240 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.246658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.246717 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.246735 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.246757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.246773 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.316527 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.316685 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.316711 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.316674779 +0000 UTC m=+148.606106427 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.316757 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.316809 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.316850 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.316856 4873 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:46:15 crc 
kubenswrapper[4873]: E0219 09:46:15.316891 4873 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.316941 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.316930565 +0000 UTC m=+148.606362213 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.316960 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.316951525 +0000 UTC m=+148.606383173 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.316982 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317030 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317063 4873 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317078 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317199 4873 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317236 4873 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317200 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.31716659 +0000 UTC m=+148.606598268 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.317343 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.317323024 +0000 UTC m=+148.606754702 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.350304 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.350404 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.350428 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.350453 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.350471 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.453920 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.453974 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.453993 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.454016 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.454035 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.484402 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.484473 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.484473 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.484604 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.484805 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:15 crc kubenswrapper[4873]: E0219 09:46:15.484914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.505635 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:57:15.324700034 +0000 UTC Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.556900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.556967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.556984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.557009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.557026 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.659592 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.659631 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.659643 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.659658 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.659670 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.762150 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.762203 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.762221 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.762245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.762261 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.865482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.865535 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.865552 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.865576 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.865593 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.967736 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.967967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.967981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.967998 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:15 crc kubenswrapper[4873]: I0219 09:46:15.968012 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:15Z","lastTransitionTime":"2026-02-19T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.070801 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.070863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.070880 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.070904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.070921 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.174408 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.174442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.174454 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.174468 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.174477 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.277685 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.277741 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.277757 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.277784 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.277804 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.381533 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.381591 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.381608 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.381638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.381655 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.483087 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:16 crc kubenswrapper[4873]: E0219 09:46:16.483309 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.485270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.485323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.485341 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.485366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.485385 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.506080 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:32:13.175358078 +0000 UTC Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.589382 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.589439 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.589461 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.589490 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.589510 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.692946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.693015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.693038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.693067 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.693089 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.795901 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.795956 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.795977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.796006 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.796029 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.899073 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.899176 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.899205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.899233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:16 crc kubenswrapper[4873]: I0219 09:46:16.899253 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:16Z","lastTransitionTime":"2026-02-19T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.002084 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.002186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.002215 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.002251 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.002287 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.105600 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.105670 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.105698 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.105734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.105756 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.209803 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.209900 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.209921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.209977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.209997 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.312959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.313370 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.313531 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.313671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.313801 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.417414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.417918 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.418068 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.418280 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.418448 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.483760 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.483778 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.483760 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:17 crc kubenswrapper[4873]: E0219 09:46:17.484195 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:17 crc kubenswrapper[4873]: E0219 09:46:17.485183 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:17 crc kubenswrapper[4873]: E0219 09:46:17.485401 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.485965 4873 scope.go:117] "RemoveContainer" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.506900 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:59:50.974744898 +0000 UTC Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.521497 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.521554 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.521579 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.521610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.521631 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.626050 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.626183 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.626204 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.626260 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.626280 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.728542 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.728597 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.728615 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.728638 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.728655 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.831353 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.831415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.831433 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.831459 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.831477 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.933767 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.933798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.933810 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.933827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.933838 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:17Z","lastTransitionTime":"2026-02-19T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.978923 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/2.log" Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.981732 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308"} Feb 19 09:46:17 crc kubenswrapper[4873]: I0219 09:46:17.982844 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.002774 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.034142 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 
09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.040337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.040378 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.040391 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.040409 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.040422 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.050470 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.071263 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201
d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.086950 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.108470 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.127755 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.142814 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.142882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.142898 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.142918 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.142933 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.145085 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.161559 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7c8560-6ad7-4360-9a1d-3137133ef615\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.182388 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.198648 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.215733 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc 
kubenswrapper[4873]: I0219 09:46:18.242149 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8712bdb-10cc-4cf7-aa21-7f34761ec6cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37236dcd27e2d6048ee4976e2f411df9eaf77c3bff9f07f06243e70b01cb63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://72381fa08fdd672973e1c6e175291b3d89a8f375117c23c7efc939fead3f62ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ec168f9dd0b793cb746abec5fd0922cbabf69b71ae55d81c0523be8fac9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95584c308545e3ea96b9d6bd6712f4e8d355baeae70edf62e3499a0e984a13af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2155b0ec1873858607c3d52b287a5533d1bbf707b7a39b4ca93e73177b2b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.245978 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.246028 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.246045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.246082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.246097 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.259187 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.279417 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.298384 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566
b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.315478 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d4
57f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.336072 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.349735 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.349786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.349798 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.349817 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.349828 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.358626 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.453015 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.453078 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.453097 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.453151 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.453170 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.483649 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:18 crc kubenswrapper[4873]: E0219 09:46:18.483828 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.507965 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:35:13.944160084 +0000 UTC Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.556632 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.556675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.556691 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.556735 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 
crc kubenswrapper[4873]: I0219 09:46:18.556751 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.659835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.659897 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.659916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.659941 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.659959 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.763269 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.763331 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.763351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.763379 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.763400 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.866861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.866921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.866940 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.866963 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.866983 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.970495 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.970566 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.970584 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.970610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.970626 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:18Z","lastTransitionTime":"2026-02-19T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.988261 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/3.log" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.989420 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/2.log" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.996886 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" exitCode=1 Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.997017 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308"} Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.997154 4873 scope.go:117] "RemoveContainer" containerID="5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91" Feb 19 09:46:18 crc kubenswrapper[4873]: I0219 09:46:18.998284 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" Feb 19 09:46:18 crc kubenswrapper[4873]: E0219 09:46:18.998648 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.039476 4873 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8712bdb-10cc-4cf7-aa21-7f34761ec6cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37236dcd27e2d6048ee4976e2f411df9eaf77c3bff9f07f06243e70b01cb63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72381fa08fdd672973e1c6e175291b3d89a8f375117c23c7efc93
9fead3f62ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ec168f9dd0b793cb746abec5fd0922cbabf69b71ae55d81c0523be8fac9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95584c308545e3ea96b9d6bd6712f4e8d355baeae70edf62e3499a0e984a13af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2155b0ec1873858607c3d52b287a5533d1bbf707b7a39b4ca93e73177b2b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbc
a8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.059872 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.072822 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.072882 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.072906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 
09:46:19.072936 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.072958 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.076638 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.094366 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc 
kubenswrapper[4873]: I0219 09:46:19.112501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.112575 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.112593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.112618 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.112635 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.115312 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0
e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.135796 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.140847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.140912 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.140936 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.140966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.140988 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.141136 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.164046 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.166212 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b0
03\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\
"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.172861 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.172920 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.172941 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.172964 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc 
kubenswrapper[4873]: I0219 09:46:19.172981 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.190473 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.194268 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc19
4fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa
93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.195272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.195311 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.195323 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.195340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.195354 
4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.210863 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d457f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.219623 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370
996d8eea\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.224906 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.224961 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.224981 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.225004 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.225022 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.228852 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.243895 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c923eae3-7568-4314-b0e0-48838f6e14fe\\\",\\\"systemUUID\\\":\\\"9f3a4afb-9582-465c-ace5-f370996d8eea\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.244172 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.246272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.246327 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.246344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.246373 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.246393 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.248030 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.271329 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.301835 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cadbc18d55dbcd16c2feb411be593e571abf164f466518982ee8271c4d3cc91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"message\\\":\\\"mespace event handler 1 for removal\\\\nI0219 09:45:46.434884 6553 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 09:45:46.434869 6553 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:45:46.434907 6553 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 09:45:46.434918 6553 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 09:45:46.434925 6553 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 09:45:46.434927 6553 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 09:45:46.434932 6553 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 09:45:46.434970 6553 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 09:45:46.434920 6553 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 09:45:46.434985 6553 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 09:45:46.434982 6553 factory.go:656] Stopping watch factory\\\\nI0219 09:45:46.434994 6553 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 09:45:46.436159 6553 ovnkube.go:599] Stopped ovnkube\\\\nI0219 09:45:46.436204 6553 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 
09:45:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:18Z\\\",\\\"message\\\":\\\"alName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 09:46:18.424282 6989 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager-operator/metrics\\\\\\\"}\\\\nF0219 09:46:18.424294 6989 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.318824 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.333818 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7c8560-6ad7-4360-9a1d-3137133ef615\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc 
kubenswrapper[4873]: I0219 09:46:19.350293 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.350350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.350368 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.350392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.350409 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.352532 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2
c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.371416 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.390295 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.408774 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.453648 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.453711 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.453731 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.453754 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.453772 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.484094 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.484349 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.484721 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.484878 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.485243 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:19 crc kubenswrapper[4873]: E0219 09:46:19.485367 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.508519 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:26:12.613388434 +0000 UTC Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.557093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.557213 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.557270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.557309 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.557333 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.660148 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.660218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.660243 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.660267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.660284 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.762986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.763058 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.763074 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.763098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.763142 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.866779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.866838 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.866854 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.866877 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.866896 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.969763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.969835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.969850 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.969872 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:19 crc kubenswrapper[4873]: I0219 09:46:19.969886 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:19Z","lastTransitionTime":"2026-02-19T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.003712 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/3.log" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.009735 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" Feb 19 09:46:20 crc kubenswrapper[4873]: E0219 09:46:20.009991 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.032603 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d4
57f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.055562 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.074287 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.074362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.074385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.074416 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.074437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.076860 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.098217 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.126550 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566
b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.146243 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.173947 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
9:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.177151 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.177218 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.177236 4873 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.177266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.177285 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.196075 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.219323 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc15eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.251454 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:18Z\\\",\\\"message\\\":\\\"alName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ing
ress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 09:46:18.424282 6989 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager-operator/metrics\\\\\\\"}\\\\nF0219 09:46:18.424294 6989 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.271566 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.280272 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.280325 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.280344 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.280366 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.280383 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.289183 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7c8560-6ad7-4360-9a1d-3137133ef615\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.307893 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.328683 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.347576 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.379765 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8712bdb-10cc-4cf7-aa21-7f34761ec6cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37236dcd27e2d6048ee4976e2f411df9eaf77c3bff9f07f06243e70b01cb63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72381fa08fdd672973e1c6e175291b3d89a8f375117c23c7efc939fead3f62ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ec168f9dd0b793cb746abec5fd0922cbabf69b71ae55d81c0523be8fac9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95584c308545e3ea96b9d6bd6712f4e8d355baeae70edf62e3499a0e984a13af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2155b0ec1873858607c3d52b287a5533d1bbf707b7a39b4ca93e73177b2b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.383267 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.383333 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.383350 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.383374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.383393 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.402518 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.419577 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.435966 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:20Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:20 crc 
kubenswrapper[4873]: I0219 09:46:20.483841 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:20 crc kubenswrapper[4873]: E0219 09:46:20.484001 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.486040 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.486098 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.486143 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.486166 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.486183 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.508863 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:38:51.165205996 +0000 UTC Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.588954 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.589007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.589025 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.589045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.589059 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.691634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.692093 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.692396 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.692537 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.692699 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.796610 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.796955 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.797177 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.797397 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.797551 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.900318 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.900377 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.900400 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.900429 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:20 crc kubenswrapper[4873]: I0219 09:46:20.900450 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:20Z","lastTransitionTime":"2026-02-19T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.003186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.003233 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.003247 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.003266 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.003280 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.105507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.105545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.105555 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.105569 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.105578 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.208316 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.208372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.208390 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.208414 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.208433 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.311800 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.311847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.311864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.311887 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.311903 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.414811 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.414838 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.414847 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.414858 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.414869 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.483714 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.484352 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:21 crc kubenswrapper[4873]: E0219 09:46:21.484619 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.484458 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:21 crc kubenswrapper[4873]: E0219 09:46:21.484733 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:21 crc kubenswrapper[4873]: E0219 09:46:21.484891 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.509539 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:16:57.266972742 +0000 UTC Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.512674 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7760a15-9ea0-42f0-b42b-72de30071d14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:18Z\\\",\\\"message\\\":\\\"alName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ing
ress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 09:46:18.424282 6989 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager-operator/metrics\\\\\\\"}\\\\nF0219 09:46:18.424294 6989 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6f649d2d74dd0c428
5fa3b98974ee330db985e62e9fff24b68e7a028045427b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz7vl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j94bh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.517682 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.517728 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.517743 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.517763 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.517779 4873 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.525213 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbv7k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33fdff17-cdda-468e-8520-7f0937acd8db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e93aee03e84febb69ba4a4a4d76aeb12d03ce9c7e0c00e318241f2286c3e548f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gql7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbv7k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.539401 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6df925a-1654-4ade-a300-97c316b0867f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:45:11Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0219 09:45:11.226743 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 09:45:11.226899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:45:11.227747 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3099442580/tls.crt::/tmp/serving-cert-3099442580/tls.key\\\\\\\"\\\\nI0219 09:45:11.466170 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 09:45:11.471307 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 09:45:11.471337 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 09:45:11.471373 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 09:45:11.471384 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 09:45:11.491622 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 09:45:11.491660 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491669 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 09:45:11.491679 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 09:45:11.491686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 09:45:11.491692 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 09:45:11.491698 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 09:45:11.491774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 09:45:11.493330 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dad8f7655015996610223e9351f82201
d43903a0c017a8be9a56240bc160279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.555519 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf63b7395a16d736e40432eaf3de6a5fdc32607519a49bd98dc44a1c190682ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.574276 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e9aa67864d205ea3b5abf40cfff62e5f3e5edc119a2871696e283a1e90162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9df67bb7a7bc1
5eec03cf919cd4ed3f1d3d10f2ed97590a49f6882fbbe8675\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.587502 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.609166 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pp77w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8bbad50-17a6-49b3-aa6a-3d8bcf05f5ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28725576a9ad3dbc36c016d908d30b313f11f977e0cc96d07d94561b473b707b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmt6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pp77w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.619904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.619952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.619965 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.619984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.619998 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.625595 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee7c8560-6ad7-4360-9a1d-3137133ef615\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://610f40cd2f97208fe9d6c9f3accc665ff4d1276e227f799b64e86b02be681a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83285abf13435688b3341f479e81eb617eacf5bb70ac2ea2c9218d9d9d313b9b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.647958 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39a3f6b0-3581-4f36-85f6-24fa414b73f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c15317bf5117d3cfd841bcc7a4475aeafb262813b4aa756a3b18eb4ccb598da3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dbf05928c62c4c7f722f2d1fbd6a2c7bfc3c1306c17dcc32780239860cc8220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4e9759da0e3aeedb86b3d7c95d090eb2326b9febb35ebd7dbe7f78abc2677c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://034d4dad3ade3a0cad5aea021ed97933978f290a04b8b71d38fa565d46a29e9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.664952 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c605647e0eb6483375a17f468fb0a3c0d91778943b69f2b28b8557484c1db7e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.675977 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98d35597-056d-48f0-b599-28b098dd45f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvs9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcp8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc 
kubenswrapper[4873]: I0219 09:46:21.693173 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8712bdb-10cc-4cf7-aa21-7f34761ec6cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37236dcd27e2d6048ee4976e2f411df9eaf77c3bff9f07f06243e70b01cb63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://72381fa08fdd672973e1c6e175291b3d89a8f375117c23c7efc939fead3f62ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d61ec168f9dd0b793cb746abec5fd0922cbabf69b71ae55d81c0523be8fac9d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95584c308545e3ea96b9d6bd6712f4e8d355baeae70edf62e3499a0e984a13af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2155b0ec1873858607c3d52b287a5533d1bbf707b7a39b4ca93e73177b2b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd368c8a34e43275736777dadfbca8025018085d520e2d7ec2bac159e23eb7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc75e6f1c2480daa5a86dd017586fdda8ff44989f50d245bb77861ca6f575d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e936da2913c3886d6abaf2e5e1a4b2e9cd3bc3f2d9c613258ac1437ce749673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:44:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:44:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.704921 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.717096 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c61760e-2955-4688-b68b-1ceeda73f356\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0f93c9f80e1dd0497f0fe105c54dc34c55e39f55d781938dcd452eea9de7b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da557
5d7c4d647ae91446f1b76837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgmwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qmsl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.722987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.723031 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.723045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc 
kubenswrapper[4873]: I0219 09:46:21.723072 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.723084 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.742218 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acb9409d-e5b1-4d32-9200-8dc32d8923d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://566b71024f32af15bc24bee5fc194fca3a17f7b2f998034c0b606befbd75a91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844e6eff3289a73700f5f016ff416dffa2d8fb519cc6daa013a89b77c86a8795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f83e0d0438b8f7f877ec029e26bbb7ac26a28a8b5e4d636f2dc117773afa505\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bab0e336ae80a0a11faa1f31c84c8d64ecdfb1edb4605288dab414b9c6764189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8592389e4b69c42b9025726509f03f6864467be724f61cd8a949e468efb8a9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34000d67cc74d05b2123daa28bd03f66cb5e5649d0b22e3e15eb035956d146a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b
e40215c68ae53a4516ce126de7e3a6bf897ca543f0dc39bbadd4c3d0fb0d39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:45:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gr6v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2lwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.756799 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"566305a3-ea47-4e60-b247-5b32fa8544e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96a5934abcf2b7d61e3ecde971dba8329b4dfb5565d3e926106cf023650a86d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc5e33551a9c1389c85f629b887464d293d4
57f3206e462c8758a01fbaa6dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpvr5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t7gjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.772472 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7cd1737-26f8-469d-a081-673330afcc5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0891943575e34f299ce1c864c89d88cf0653fc114ccddd336460482a1ed05b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a18034acac9f1de3bcd5002e36a77980bc97f6545ec6ad50d2101b7dff95b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a02c6fa1232d8b067f89cdfc3ceded8077483d001343a7c8a7aeacb732627f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:44:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:44:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.788947 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.807132 4873 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4pk8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1ae3d8d-27cf-489f-a6ba-ef914db74bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:45:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:46:04Z\\\",\\\"message\\\":\\\"2026-02-19T09:45:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46\\\\n2026-02-19T09:45:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_00b4ab66-db85-4e00-bc50-f10b22269f46 to /host/opt/cni/bin/\\\\n2026-02-19T09:45:19Z [verbose] multus-daemon started\\\\n2026-02-19T09:45:19Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:46:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:45:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnjnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:45:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4pk8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:46:21Z is after 2025-08-24T17:21:41Z" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.826226 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.826291 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.826316 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.826347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.826367 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.929890 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.929953 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.929973 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.930000 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:21 crc kubenswrapper[4873]: I0219 09:46:21.930019 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:21Z","lastTransitionTime":"2026-02-19T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.033347 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.033407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.033425 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.033450 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.033467 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.137191 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.137256 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.137279 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.137307 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.137327 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.240884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.240967 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.240986 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.241011 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.241031 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.343372 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.343420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.343455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.343473 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.343485 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.446261 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.446337 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.446362 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.446393 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.446416 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.484169 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:22 crc kubenswrapper[4873]: E0219 09:46:22.484353 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.509913 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:59:40.704236036 +0000 UTC Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.549865 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.550009 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.550033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.550062 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.550086 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.653305 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.653355 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.653369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.653387 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.653399 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.755881 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.755935 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.755952 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.755977 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.755994 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.859252 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.859308 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.859329 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.859354 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.859372 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.962306 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.962365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.962383 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.962407 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:22 crc kubenswrapper[4873]: I0219 09:46:22.962429 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:22Z","lastTransitionTime":"2026-02-19T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.065022 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.065085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.065130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.065155 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.065179 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.167750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.167809 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.167835 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.167867 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.167890 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.271577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.271636 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.271653 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.271676 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.271695 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.375085 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.375186 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.375212 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.375245 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.375266 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.478156 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.478207 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.478220 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.478235 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.478249 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.483740 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.483874 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:23 crc kubenswrapper[4873]: E0219 09:46:23.483914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.483980 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:23 crc kubenswrapper[4873]: E0219 09:46:23.484091 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:23 crc kubenswrapper[4873]: E0219 09:46:23.484225 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.510858 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:14:47.536313976 +0000 UTC Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.581294 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.581369 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.581389 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.581415 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.581437 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.685365 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.685455 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.685475 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.685501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.685518 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.788577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.788634 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.788652 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.788675 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.788692 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.891884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.891948 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.891966 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.891991 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.892009 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.995806 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.995864 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.995884 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.995908 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:23 crc kubenswrapper[4873]: I0219 09:46:23.995926 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:23Z","lastTransitionTime":"2026-02-19T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.098995 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.099061 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.099082 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.099130 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.099148 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.202145 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.202225 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.202248 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.202275 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.202293 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.305891 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.305997 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.306017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.306041 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.306079 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.408996 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.409139 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.409160 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.409184 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.409200 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.483940 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:24 crc kubenswrapper[4873]: E0219 09:46:24.484162 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.511391 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:48:27.523658403 +0000 UTC Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.512420 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.512492 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.512516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.512545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.512571 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.616381 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.616458 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.616479 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.616959 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.617015 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.720515 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.720597 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.720621 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.720649 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.720670 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.823595 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.823656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.823674 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.823699 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.823722 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.926670 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.926734 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.926750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.926778 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:24 crc kubenswrapper[4873]: I0219 09:46:24.926801 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:24Z","lastTransitionTime":"2026-02-19T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.029161 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.029270 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.029290 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.029313 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.029330 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.132507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.132571 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.132589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.132613 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.132629 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.235709 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.235779 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.235797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.235827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.235849 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.338505 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.338561 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.338577 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.338603 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.338621 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.442216 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.442284 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.442301 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.442324 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.442341 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.483756 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.483841 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:25 crc kubenswrapper[4873]: E0219 09:46:25.483914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:25 crc kubenswrapper[4873]: E0219 09:46:25.484026 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.484149 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:25 crc kubenswrapper[4873]: E0219 09:46:25.484254 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.548471 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:41:48.728004914 +0000 UTC Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.550516 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.550564 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.550580 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.550601 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.550618 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.653740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.653807 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.653827 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.653854 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.653873 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.756916 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.756984 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.757001 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.757028 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.757046 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.860550 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.860627 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.860645 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.860671 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.860689 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.963340 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.963419 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.963445 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.963476 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:25 crc kubenswrapper[4873]: I0219 09:46:25.963500 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:25Z","lastTransitionTime":"2026-02-19T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.066700 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.066775 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.066797 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.066826 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.066848 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.170149 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.170205 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.170223 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.170249 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.170266 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.274017 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.274657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.274816 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.274951 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.275077 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.378740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.378853 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.378871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.378897 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.378914 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.482007 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.482077 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.482142 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.482180 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.482203 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.483286 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:26 crc kubenswrapper[4873]: E0219 09:46:26.483647 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.549257 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:55:49.251107807 +0000 UTC Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.584320 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.584374 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.584392 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.584416 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.584435 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.686678 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.686729 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.686740 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.686761 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.686775 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.789750 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.789819 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.789839 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.789863 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.789880 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.893845 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.893904 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.893921 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.893946 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.893964 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.996924 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.996987 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.997008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.997038 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:26 crc kubenswrapper[4873]: I0219 09:46:26.997061 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:26Z","lastTransitionTime":"2026-02-19T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.100436 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.100501 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.100521 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.100547 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.100569 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.204242 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.204326 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.204351 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.204376 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.204395 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.306928 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.307081 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.307136 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.307164 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.307182 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.410723 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.410785 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.410805 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.410830 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.410848 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.484082 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.484151 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.484194 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:27 crc kubenswrapper[4873]: E0219 09:46:27.484321 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:27 crc kubenswrapper[4873]: E0219 09:46:27.484435 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:27 crc kubenswrapper[4873]: E0219 09:46:27.484668 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.513385 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.513442 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.513462 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.513486 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.513504 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.549746 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:22:53.423393531 +0000 UTC Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.616225 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.616279 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.616297 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.616321 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.616338 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.719553 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.719630 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.719656 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.719686 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.719707 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.823386 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.823488 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.823507 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.823537 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.823555 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.926354 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.926422 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.926440 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.926465 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:27 crc kubenswrapper[4873]: I0219 09:46:27.926482 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:27Z","lastTransitionTime":"2026-02-19T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.029055 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.029167 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.029199 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.029230 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.029251 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.132482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.132565 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.132588 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.132614 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.132632 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.236482 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.236551 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.236570 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.236596 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.236614 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.339680 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.339738 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.339756 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.339786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.339810 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.442624 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.443036 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.443238 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.443545 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.443745 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.483506 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:28 crc kubenswrapper[4873]: E0219 09:46:28.483850 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.547929 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.547990 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.548008 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.548033 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.548050 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.550523 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:02:11.473186163 +0000 UTC Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.651983 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.652047 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.652065 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.652171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.652192 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.755524 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.755589 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.755612 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.755640 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.755663 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.858278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.858556 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.858685 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.858786 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.858877 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.961593 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.961645 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.961657 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.961681 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:28 crc kubenswrapper[4873]: I0219 09:46:28.961696 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:28Z","lastTransitionTime":"2026-02-19T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.064663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.065171 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.065342 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.065487 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.065617 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:29Z","lastTransitionTime":"2026-02-19T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.168871 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.169262 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.169426 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.169652 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.169807 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:29Z","lastTransitionTime":"2026-02-19T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.273578 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.273642 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.273663 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.273687 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.273707 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:29Z","lastTransitionTime":"2026-02-19T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.317190 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.317256 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.317278 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.317307 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.317329 4873 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:46:29Z","lastTransitionTime":"2026-02-19T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.387331 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"] Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.387714 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.392553 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.392570 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.392673 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.392671 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.443410 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.443379186 podStartE2EDuration="15.443379186s" podCreationTimestamp="2026-02-19 09:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.442909835 +0000 UTC m=+98.732341553" watchObservedRunningTime="2026-02-19 09:46:29.443379186 +0000 UTC m=+98.732810864"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.485841 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.486218 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0885c13-02f8-4892-8f84-bcb38f36cfe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.485875 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.485966 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.486281 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.486393 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.486425 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0885c13-02f8-4892-8f84-bcb38f36cfe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.486500 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0885c13-02f8-4892-8f84-bcb38f36cfe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: E0219 09:46:29.486954 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:29 crc kubenswrapper[4873]: E0219 09:46:29.487083 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:29 crc kubenswrapper[4873]: E0219 09:46:29.487259 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.512947 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podStartSLOduration=72.512897586 podStartE2EDuration="1m12.512897586s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.485421557 +0000 UTC m=+98.774853245" watchObservedRunningTime="2026-02-19 09:46:29.512897586 +0000 UTC m=+98.802329264"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.529341 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.52930999 podStartE2EDuration="1m15.52930999s" podCreationTimestamp="2026-02-19 09:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.527390017 +0000 UTC m=+98.816821705" watchObservedRunningTime="2026-02-19 09:46:29.52930999 +0000 UTC m=+98.818741678"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.551128 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:57:03.697731662 +0000 UTC
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.551243 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.560450 4873 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587150 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0885c13-02f8-4892-8f84-bcb38f36cfe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587197 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0885c13-02f8-4892-8f84-bcb38f36cfe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587218 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587245 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587261 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0885c13-02f8-4892-8f84-bcb38f36cfe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587389 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.587430 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c0885c13-02f8-4892-8f84-bcb38f36cfe8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.588202 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0885c13-02f8-4892-8f84-bcb38f36cfe8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.596483 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0885c13-02f8-4892-8f84-bcb38f36cfe8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.598708 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4pk8x" podStartSLOduration=72.598690487 podStartE2EDuration="1m12.598690487s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.569683284 +0000 UTC m=+98.859114962" watchObservedRunningTime="2026-02-19 09:46:29.598690487 +0000 UTC m=+98.888122155"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.613565 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c0885c13-02f8-4892-8f84-bcb38f36cfe8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wdlmx\" (UID: \"c0885c13-02f8-4892-8f84-bcb38f36cfe8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.620612 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n2lwn" podStartSLOduration=72.620590802 podStartE2EDuration="1m12.620590802s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.599665589 +0000 UTC m=+98.889097237" watchObservedRunningTime="2026-02-19 09:46:29.620590802 +0000 UTC m=+98.910022450"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.621712 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t7gjb" podStartSLOduration=72.621697607 podStartE2EDuration="1m12.621697607s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.621209396 +0000 UTC m=+98.910641074" watchObservedRunningTime="2026-02-19 09:46:29.621697607 +0000 UTC m=+98.911129255"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.638190 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.638171892 podStartE2EDuration="1m18.638171892s" podCreationTimestamp="2026-02-19 09:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.637883186 +0000 UTC m=+98.927314834" watchObservedRunningTime="2026-02-19 09:46:29.638171892 +0000 UTC m=+98.927603520"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.709491 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.725513 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kbv7k" podStartSLOduration=72.725495467 podStartE2EDuration="1m12.725495467s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.707238412 +0000 UTC m=+98.996670060" watchObservedRunningTime="2026-02-19 09:46:29.725495467 +0000 UTC m=+99.014927115"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.743984 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.743950576 podStartE2EDuration="21.743950576s" podCreationTimestamp="2026-02-19 09:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.741833969 +0000 UTC m=+99.031265647" watchObservedRunningTime="2026-02-19 09:46:29.743950576 +0000 UTC m=+99.033382234"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.761827 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.761806052 podStartE2EDuration="43.761806052s" podCreationTimestamp="2026-02-19 09:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.760353079 +0000 UTC m=+99.049784757" watchObservedRunningTime="2026-02-19 09:46:29.761806052 +0000 UTC m=+99.051237690"
Feb 19 09:46:29 crc kubenswrapper[4873]: I0219 09:46:29.810021 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pp77w" podStartSLOduration=72.81000334 podStartE2EDuration="1m12.81000334s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:29.809439517 +0000 UTC m=+99.098871165" watchObservedRunningTime="2026-02-19 09:46:29.81000334 +0000 UTC m=+99.099434978"
Feb 19 09:46:30 crc kubenswrapper[4873]: I0219 09:46:30.054250 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx" event={"ID":"c0885c13-02f8-4892-8f84-bcb38f36cfe8","Type":"ContainerStarted","Data":"256e3d62682ab17bf2f38c8eed08c57aa7dbf1ea131685badc363bc76215eb08"}
Feb 19 09:46:30 crc kubenswrapper[4873]: I0219 09:46:30.054335 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx" event={"ID":"c0885c13-02f8-4892-8f84-bcb38f36cfe8","Type":"ContainerStarted","Data":"034d90cb62e1ad92539f0b2283896614df219d6eb9bf6d5ebfee1928b31f365d"}
Feb 19 09:46:30 crc kubenswrapper[4873]: I0219 09:46:30.075327 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wdlmx" podStartSLOduration=73.075298728 podStartE2EDuration="1m13.075298728s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:30.073549429 +0000 UTC m=+99.362981117" watchObservedRunningTime="2026-02-19 09:46:30.075298728 +0000 UTC m=+99.364730396"
Feb 19 09:46:30 crc kubenswrapper[4873]: I0219 09:46:30.483632 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:30 crc kubenswrapper[4873]: E0219 09:46:30.483857 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:31 crc kubenswrapper[4873]: I0219 09:46:31.483658 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:31 crc kubenswrapper[4873]: E0219 09:46:31.485559 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:31 crc kubenswrapper[4873]: I0219 09:46:31.485676 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:31 crc kubenswrapper[4873]: I0219 09:46:31.485773 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:31 crc kubenswrapper[4873]: E0219 09:46:31.486024 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:31 crc kubenswrapper[4873]: E0219 09:46:31.486130 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:32 crc kubenswrapper[4873]: I0219 09:46:32.483284 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:32 crc kubenswrapper[4873]: E0219 09:46:32.483648 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:33 crc kubenswrapper[4873]: I0219 09:46:33.484065 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:33 crc kubenswrapper[4873]: I0219 09:46:33.484189 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:33 crc kubenswrapper[4873]: I0219 09:46:33.484099 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:33 crc kubenswrapper[4873]: E0219 09:46:33.484510 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:33 crc kubenswrapper[4873]: E0219 09:46:33.484789 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:33 crc kubenswrapper[4873]: E0219 09:46:33.484914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:34 crc kubenswrapper[4873]: I0219 09:46:34.483465 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:34 crc kubenswrapper[4873]: E0219 09:46:34.483623 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:34 crc kubenswrapper[4873]: I0219 09:46:34.484210 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308"
Feb 19 09:46:34 crc kubenswrapper[4873]: E0219 09:46:34.484372 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14"
Feb 19 09:46:35 crc kubenswrapper[4873]: I0219 09:46:35.484023 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:35 crc kubenswrapper[4873]: I0219 09:46:35.484196 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:35 crc kubenswrapper[4873]: I0219 09:46:35.484201 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:35 crc kubenswrapper[4873]: E0219 09:46:35.484368 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:35 crc kubenswrapper[4873]: E0219 09:46:35.484514 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:35 crc kubenswrapper[4873]: E0219 09:46:35.484640 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:35 crc kubenswrapper[4873]: I0219 09:46:35.858699 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:35 crc kubenswrapper[4873]: E0219 09:46:35.858913 4873 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 09:46:35 crc kubenswrapper[4873]: E0219 09:46:35.859450 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs podName:98d35597-056d-48f0-b599-28b098dd45f3 nodeName:}" failed. No retries permitted until 2026-02-19 09:47:39.859417676 +0000 UTC m=+169.148849354 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs") pod "network-metrics-daemon-lcp8k" (UID: "98d35597-056d-48f0-b599-28b098dd45f3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 09:46:36 crc kubenswrapper[4873]: I0219 09:46:36.483811 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:36 crc kubenswrapper[4873]: E0219 09:46:36.483977 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:37 crc kubenswrapper[4873]: I0219 09:46:37.483715 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:37 crc kubenswrapper[4873]: I0219 09:46:37.483757 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:37 crc kubenswrapper[4873]: I0219 09:46:37.483767 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:37 crc kubenswrapper[4873]: E0219 09:46:37.484722 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:37 crc kubenswrapper[4873]: E0219 09:46:37.484762 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:37 crc kubenswrapper[4873]: E0219 09:46:37.484572 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:38 crc kubenswrapper[4873]: I0219 09:46:38.483419 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:38 crc kubenswrapper[4873]: E0219 09:46:38.483589 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:39 crc kubenswrapper[4873]: I0219 09:46:39.483560 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:39 crc kubenswrapper[4873]: I0219 09:46:39.483637 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:39 crc kubenswrapper[4873]: E0219 09:46:39.484082 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:39 crc kubenswrapper[4873]: I0219 09:46:39.483638 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:39 crc kubenswrapper[4873]: E0219 09:46:39.484256 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:39 crc kubenswrapper[4873]: E0219 09:46:39.484338 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:40 crc kubenswrapper[4873]: I0219 09:46:40.483331 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:40 crc kubenswrapper[4873]: E0219 09:46:40.483508 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:41 crc kubenswrapper[4873]: I0219 09:46:41.483913 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:41 crc kubenswrapper[4873]: I0219 09:46:41.484137 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:41 crc kubenswrapper[4873]: I0219 09:46:41.484198 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:41 crc kubenswrapper[4873]: E0219 09:46:41.486059 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:41 crc kubenswrapper[4873]: E0219 09:46:41.486295 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:41 crc kubenswrapper[4873]: E0219 09:46:41.486396 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:42 crc kubenswrapper[4873]: I0219 09:46:42.484071 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:42 crc kubenswrapper[4873]: E0219 09:46:42.484321 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:43 crc kubenswrapper[4873]: I0219 09:46:43.484020 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k"
Feb 19 09:46:43 crc kubenswrapper[4873]: I0219 09:46:43.484029 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:46:43 crc kubenswrapper[4873]: I0219 09:46:43.484167 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:46:43 crc kubenswrapper[4873]: E0219 09:46:43.484487 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3"
Feb 19 09:46:43 crc kubenswrapper[4873]: E0219 09:46:43.484648 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:46:43 crc kubenswrapper[4873]: E0219 09:46:43.484830 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:46:44 crc kubenswrapper[4873]: I0219 09:46:44.483723 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:46:44 crc kubenswrapper[4873]: E0219 09:46:44.483883 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:46:45 crc kubenswrapper[4873]: I0219 09:46:45.483640 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:45 crc kubenswrapper[4873]: I0219 09:46:45.483692 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:45 crc kubenswrapper[4873]: I0219 09:46:45.483659 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:45 crc kubenswrapper[4873]: E0219 09:46:45.483827 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:45 crc kubenswrapper[4873]: E0219 09:46:45.484182 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:45 crc kubenswrapper[4873]: E0219 09:46:45.484020 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:46 crc kubenswrapper[4873]: I0219 09:46:46.483941 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:46 crc kubenswrapper[4873]: E0219 09:46:46.484086 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:47 crc kubenswrapper[4873]: I0219 09:46:47.483254 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:47 crc kubenswrapper[4873]: I0219 09:46:47.483268 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:47 crc kubenswrapper[4873]: I0219 09:46:47.483388 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:47 crc kubenswrapper[4873]: E0219 09:46:47.483511 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:47 crc kubenswrapper[4873]: E0219 09:46:47.483659 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:47 crc kubenswrapper[4873]: E0219 09:46:47.484181 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:47 crc kubenswrapper[4873]: I0219 09:46:47.484683 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" Feb 19 09:46:47 crc kubenswrapper[4873]: E0219 09:46:47.485004 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j94bh_openshift-ovn-kubernetes(a7760a15-9ea0-42f0-b42b-72de30071d14)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" Feb 19 09:46:48 crc kubenswrapper[4873]: I0219 09:46:48.483358 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:48 crc kubenswrapper[4873]: E0219 09:46:48.483517 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:49 crc kubenswrapper[4873]: I0219 09:46:49.483439 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:49 crc kubenswrapper[4873]: I0219 09:46:49.483470 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:49 crc kubenswrapper[4873]: I0219 09:46:49.483527 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:49 crc kubenswrapper[4873]: E0219 09:46:49.484259 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:49 crc kubenswrapper[4873]: E0219 09:46:49.484369 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:49 crc kubenswrapper[4873]: E0219 09:46:49.484444 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:50 crc kubenswrapper[4873]: I0219 09:46:50.483593 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:50 crc kubenswrapper[4873]: E0219 09:46:50.483722 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.129747 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/1.log" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.130556 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/0.log" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.130610 4873 generic.go:334] "Generic (PLEG): container finished" podID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff" containerID="81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc" exitCode=1 Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.130646 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerDied","Data":"81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc"} Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.130685 4873 scope.go:117] "RemoveContainer" containerID="6e5a838a826471a87367bdcbb5dc9b7586d8c2a751ffc5431bab3269e1a0b003" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.134803 4873 scope.go:117] "RemoveContainer" containerID="81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.137484 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4pk8x_openshift-multus(e1ae3d8d-27cf-489f-a6ba-ef914db74bff)\"" pod="openshift-multus/multus-4pk8x" podUID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.462210 4873 kubelet_node_status.go:497] "Node not becoming ready in time after 
startup" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.484167 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.484198 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:51 crc kubenswrapper[4873]: I0219 09:46:51.484498 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.485846 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.486024 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.486190 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:51 crc kubenswrapper[4873]: E0219 09:46:51.601395 4873 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:46:52 crc kubenswrapper[4873]: I0219 09:46:52.137607 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/1.log" Feb 19 09:46:52 crc kubenswrapper[4873]: I0219 09:46:52.483926 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:52 crc kubenswrapper[4873]: E0219 09:46:52.484137 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:53 crc kubenswrapper[4873]: I0219 09:46:53.484270 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:53 crc kubenswrapper[4873]: I0219 09:46:53.484287 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:53 crc kubenswrapper[4873]: E0219 09:46:53.484431 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:53 crc kubenswrapper[4873]: I0219 09:46:53.484571 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:53 crc kubenswrapper[4873]: E0219 09:46:53.484630 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:53 crc kubenswrapper[4873]: E0219 09:46:53.484783 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:54 crc kubenswrapper[4873]: I0219 09:46:54.484387 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:54 crc kubenswrapper[4873]: E0219 09:46:54.484542 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:55 crc kubenswrapper[4873]: I0219 09:46:55.483372 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:55 crc kubenswrapper[4873]: I0219 09:46:55.483428 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:55 crc kubenswrapper[4873]: E0219 09:46:55.483670 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:55 crc kubenswrapper[4873]: I0219 09:46:55.483817 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:55 crc kubenswrapper[4873]: E0219 09:46:55.484098 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:55 crc kubenswrapper[4873]: E0219 09:46:55.484638 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:56 crc kubenswrapper[4873]: I0219 09:46:56.483483 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:56 crc kubenswrapper[4873]: E0219 09:46:56.483622 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:56 crc kubenswrapper[4873]: E0219 09:46:56.602787 4873 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:46:57 crc kubenswrapper[4873]: I0219 09:46:57.483927 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:57 crc kubenswrapper[4873]: I0219 09:46:57.483937 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:57 crc kubenswrapper[4873]: I0219 09:46:57.484163 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:57 crc kubenswrapper[4873]: E0219 09:46:57.484354 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:57 crc kubenswrapper[4873]: E0219 09:46:57.484446 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:57 crc kubenswrapper[4873]: E0219 09:46:57.484574 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:58 crc kubenswrapper[4873]: I0219 09:46:58.484746 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:46:58 crc kubenswrapper[4873]: E0219 09:46:58.484858 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:46:58 crc kubenswrapper[4873]: I0219 09:46:58.485426 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.164910 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/3.log" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.167255 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerStarted","Data":"fbe398acea08ecbb128c7f23474abd3c929b29591afd83ce34befc3c628c7ddb"} Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.167768 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.202394 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podStartSLOduration=102.202369799 podStartE2EDuration="1m42.202369799s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:59.201715865 +0000 UTC m=+128.491147523" 
watchObservedRunningTime="2026-02-19 09:46:59.202369799 +0000 UTC m=+128.491801457" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.484140 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.484211 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.484159 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:46:59 crc kubenswrapper[4873]: E0219 09:46:59.484317 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:46:59 crc kubenswrapper[4873]: E0219 09:46:59.484410 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:46:59 crc kubenswrapper[4873]: E0219 09:46:59.484609 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:46:59 crc kubenswrapper[4873]: I0219 09:46:59.528902 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lcp8k"] Feb 19 09:47:00 crc kubenswrapper[4873]: I0219 09:47:00.169621 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:00 crc kubenswrapper[4873]: E0219 09:47:00.169721 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:47:00 crc kubenswrapper[4873]: I0219 09:47:00.483976 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:00 crc kubenswrapper[4873]: E0219 09:47:00.484223 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:47:01 crc kubenswrapper[4873]: I0219 09:47:01.483480 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:01 crc kubenswrapper[4873]: I0219 09:47:01.483507 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:01 crc kubenswrapper[4873]: I0219 09:47:01.485161 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:01 crc kubenswrapper[4873]: E0219 09:47:01.485155 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:47:01 crc kubenswrapper[4873]: E0219 09:47:01.485333 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:47:01 crc kubenswrapper[4873]: E0219 09:47:01.485454 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:47:01 crc kubenswrapper[4873]: E0219 09:47:01.604197 4873 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:47:02 crc kubenswrapper[4873]: I0219 09:47:02.483792 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:02 crc kubenswrapper[4873]: E0219 09:47:02.485030 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:47:02 crc kubenswrapper[4873]: I0219 09:47:02.484334 4873 scope.go:117] "RemoveContainer" containerID="81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc" Feb 19 09:47:03 crc kubenswrapper[4873]: I0219 09:47:03.187515 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/1.log" Feb 19 09:47:03 crc kubenswrapper[4873]: I0219 09:47:03.187889 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerStarted","Data":"ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2"} Feb 19 09:47:03 crc kubenswrapper[4873]: I0219 09:47:03.483488 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:03 crc kubenswrapper[4873]: I0219 09:47:03.483509 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:03 crc kubenswrapper[4873]: E0219 09:47:03.484023 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:47:03 crc kubenswrapper[4873]: I0219 09:47:03.483603 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:03 crc kubenswrapper[4873]: E0219 09:47:03.484259 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:47:03 crc kubenswrapper[4873]: E0219 09:47:03.484986 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:47:04 crc kubenswrapper[4873]: I0219 09:47:04.483406 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:04 crc kubenswrapper[4873]: E0219 09:47:04.483624 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:47:05 crc kubenswrapper[4873]: I0219 09:47:05.484082 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:05 crc kubenswrapper[4873]: I0219 09:47:05.484177 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:05 crc kubenswrapper[4873]: I0219 09:47:05.484126 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:05 crc kubenswrapper[4873]: E0219 09:47:05.484885 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcp8k" podUID="98d35597-056d-48f0-b599-28b098dd45f3" Feb 19 09:47:05 crc kubenswrapper[4873]: E0219 09:47:05.485148 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:47:05 crc kubenswrapper[4873]: E0219 09:47:05.485838 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:47:06 crc kubenswrapper[4873]: I0219 09:47:06.483419 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:06 crc kubenswrapper[4873]: E0219 09:47:06.483603 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.484164 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.484265 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.485139 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.493958 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.495381 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.495506 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.495795 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.496046 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 09:47:07 crc kubenswrapper[4873]: I0219 09:47:07.496058 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 09:47:08 crc kubenswrapper[4873]: I0219 09:47:08.483996 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.480045 4873 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.547273 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k627b"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.547780 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553001 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553079 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553213 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553345 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553410 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553516 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553621 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.553725 4873 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxcz7"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.554588 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.557350 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.557457 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558150 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558276 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558315 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558548 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558567 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.558818 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.559723 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.559742 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.559850 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.562179 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.562721 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.562889 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.565268 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.565641 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.571705 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.572537 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.572623 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" 
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.572627 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.572855 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.574223 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.575016 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.575802 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.576026 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.575815 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577095 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577141 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577155 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577346 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577368 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577428 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577432 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577440 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577585 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577611 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 
09:47:10.577694 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577867 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.577988 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.578132 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.578425 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jsc24"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.578896 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.579245 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.579589 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.579721 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.579817 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.579912 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580050 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580062 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580167 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580300 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580420 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 
09:47:10.580699 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.580798 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.581566 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.581727 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.581994 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gbzll"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.586415 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.587001 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.587767 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.588247 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.593175 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.593809 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.594435 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.594478 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.605281 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.605561 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.605720 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.605984 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.612505 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.613664 4873 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.613699 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.614153 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.615296 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.615525 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.615301 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.616356 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.616808 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.621337 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.621571 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.643557 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-shnwj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.643805 4873 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxcz7"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.643850 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.643875 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644163 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644291 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644391 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644495 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644610 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644713 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644868 4873 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644900 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.644952 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.645017 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.645036 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.645135 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.646581 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.647010 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.647995 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.648330 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.648714 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.649369 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.649879 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.650392 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.650498 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.651297 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9pq25"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.651648 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9pq25"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.652053 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.652431 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.655065 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656075 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2d87932-1993-464d-b3d2-71025526e1f2-serving-cert\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656168 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656199 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656223 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656250 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csvqh\" (UniqueName: \"kubernetes.io/projected/b2d87932-1993-464d-b3d2-71025526e1f2-kube-api-access-csvqh\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656292 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656315 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656336 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656422 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656447 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxsx\" (UniqueName: \"kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656475 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656495 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbm8s\" (UniqueName: \"kubernetes.io/projected/df659e7d-39ab-41ee-8df5-08896976666c-kube-api-access-tbm8s\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656520 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-client\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656541 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656561 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656587 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5198f9e2-ae27-4804-ab74-0759a5217d89-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656607 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-audit-dir\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656627 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656679 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-auth-proxy-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656730 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656751 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjkk\" (UniqueName: \"kubernetes.io/projected/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-kube-api-access-ltjkk\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656776 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656800 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-images\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656823 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-config\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656849 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5198f9e2-ae27-4804-ab74-0759a5217d89-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656872 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656893 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-config\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656921 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-serving-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656955 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-policies\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.656983 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657017 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657039 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-client\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657080 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-config\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657121 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657151 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657172 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-encryption-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657196 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4sz\" (UniqueName: \"kubernetes.io/projected/5198f9e2-ae27-4804-ab74-0759a5217d89-kube-api-access-gd4sz\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657219 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657245 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657269 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df659e7d-39ab-41ee-8df5-08896976666c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657431 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-service-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657505 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkhf\" (UniqueName: \"kubernetes.io/projected/595c8db4-733e-4729-aa34-8be7307043a8-kube-api-access-6jkhf\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657549 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-dir\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657587 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-serving-cert\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657613 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-encryption-config\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657653 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-trusted-ca\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657677 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657708 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-node-pullsecrets\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657745 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-audit\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657765 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r54wb\" (UniqueName: \"kubernetes.io/projected/5968ec26-dea6-4e79-99b1-5954e173d226-kube-api-access-r54wb\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657790 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-machine-approver-tls\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657810 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595c8db4-733e-4729-aa34-8be7307043a8-serving-cert\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657834 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfvdj\" (UniqueName: \"kubernetes.io/projected/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-kube-api-access-nfvdj\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657889 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-image-import-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657916 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657937 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4wz\" (UniqueName: \"kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657966 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9hc\" (UniqueName: \"kubernetes.io/projected/9e9b2e26-976d-498c-88d8-dbddd520c9bf-kube-api-access-5l9hc\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.657986 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-serving-cert\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.658011 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.658033 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmb7k\" (UniqueName: \"kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.661946 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.662461 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.662889 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.663595 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.664013 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.664152 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.665302 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.665310 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.665636 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.665988 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.666499 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.666553 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.666657 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.666752 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.666836 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.667044 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.667400 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.672163 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.672653 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.672728 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.672991 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.674284 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.682494 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.682932 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kzpbf"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.683774 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kzpbf"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.684792 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.686844 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.691419 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.691954 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.691265 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.694339 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.694617 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"]
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.695235 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.697524 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.697567 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.698469 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.700399 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.701163 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.703527 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.711527 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmrn5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.711917 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.711935 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.712218 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.712436 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.712566 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.712649 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.713143 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.713457 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.713851 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tjxkj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.714274 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.714324 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.714381 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.714555 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.714885 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.716961 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.717996 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.718347 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.718687 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.718696 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.718722 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.719001 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.720717 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.720901 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.721806 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-shnwj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.722748 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.725680 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.726668 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2798g"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.727543 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.727939 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vklwp"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.729605 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.729633 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.729729 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.729732 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.730548 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.731727 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gbzll"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.732929 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.734228 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.735591 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.736932 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.738749 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9pq25"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.740052 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k627b"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.741823 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tjxkj"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.742398 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg"] Feb 19 09:47:10 crc kubenswrapper[4873]: 
I0219 09:47:10.744088 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.745212 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.747003 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.748262 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.749349 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.750977 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.757537 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759126 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-policies\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759160 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-default-certificate\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqpt\" (UniqueName: \"kubernetes.io/projected/3c4f7134-312f-4f1d-a344-80d44d65c371-kube-api-access-5lqpt\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759220 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759241 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759263 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrbb\" (UniqueName: \"kubernetes.io/projected/57d54c43-611a-40f1-b05e-9a0007dbe3ec-kube-api-access-hwrbb\") pod \"migrator-59844c95c7-5tt6k\" (UID: \"57d54c43-611a-40f1-b05e-9a0007dbe3ec\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" 
Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759293 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d639ff25-343e-4e7c-bd2e-f5fc533923f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759315 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759337 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759376 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-client\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759396 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-config\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759416 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759438 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fz4c\" (UniqueName: \"kubernetes.io/projected/d639ff25-343e-4e7c-bd2e-f5fc533923f4-kube-api-access-2fz4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759461 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc 
kubenswrapper[4873]: I0219 09:47:10.759479 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-encryption-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759499 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759519 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34f3caca-1b4c-493d-a10b-277b42d7ce72-service-ca-bundle\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759541 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad63def1-18c4-4841-a936-b7c7e42ce092-config\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759562 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4sz\" (UniqueName: \"kubernetes.io/projected/5198f9e2-ae27-4804-ab74-0759a5217d89-kube-api-access-gd4sz\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: 
\"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759584 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759605 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759627 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df659e7d-39ab-41ee-8df5-08896976666c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759649 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhg7\" (UniqueName: \"kubernetes.io/projected/5288b888-1b48-4590-8d10-f3688ba87a41-kube-api-access-rzhg7\") pod \"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759669 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4f7134-312f-4f1d-a344-80d44d65c371-serving-cert\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759689 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759704 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-policies\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759710 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-service-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759730 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkhf\" (UniqueName: \"kubernetes.io/projected/595c8db4-733e-4729-aa34-8be7307043a8-kube-api-access-6jkhf\") pod \"authentication-operator-69f744f599-jsc24\" (UID: 
\"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759750 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-dir\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759772 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwks\" (UniqueName: \"kubernetes.io/projected/12ef881d-885a-4215-bd57-27966cb209b8-kube-api-access-fpwks\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759792 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48911b55-fb42-412b-9298-4cba1105a164-config\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759811 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhkq\" (UniqueName: \"kubernetes.io/projected/34f3caca-1b4c-493d-a10b-277b42d7ce72-kube-api-access-8rhkq\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759830 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-serving-cert\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759847 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-encryption-config\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759864 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-audit\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759882 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-trusted-ca\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759900 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759920 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79bb3a49-346f-49b7-bb8e-c358105f8035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759939 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-node-pullsecrets\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759960 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r54wb\" (UniqueName: \"kubernetes.io/projected/5968ec26-dea6-4e79-99b1-5954e173d226-kube-api-access-r54wb\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-machine-approver-tls\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.759995 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595c8db4-733e-4729-aa34-8be7307043a8-serving-cert\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760016 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfvdj\" (UniqueName: \"kubernetes.io/projected/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-kube-api-access-nfvdj\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760035 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48911b55-fb42-412b-9298-4cba1105a164-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760055 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760076 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5288b888-1b48-4590-8d10-f3688ba87a41-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 
09:47:10.760097 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760135 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-image-import-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760155 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9hc\" (UniqueName: \"kubernetes.io/projected/9e9b2e26-976d-498c-88d8-dbddd520c9bf-kube-api-access-5l9hc\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760174 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4wz\" (UniqueName: \"kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760196 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgcd\" (UniqueName: \"kubernetes.io/projected/6f60efd0-54f5-43eb-b824-f8eaa836df60-kube-api-access-rkgcd\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" 
(UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760219 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmb7k\" (UniqueName: \"kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760239 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-serving-cert\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760259 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760280 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760300 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b2d87932-1993-464d-b3d2-71025526e1f2-serving-cert\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760330 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760350 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760368 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-srv-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760388 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760410 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csvqh\" (UniqueName: \"kubernetes.io/projected/b2d87932-1993-464d-b3d2-71025526e1f2-kube-api-access-csvqh\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760435 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760470 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760489 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760509 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12ef881d-885a-4215-bd57-27966cb209b8-proxy-tls\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: 
\"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760528 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b881e81-67ed-4c33-a992-da59d7996b9d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760549 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760568 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760588 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760608 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760631 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48911b55-fb42-412b-9298-4cba1105a164-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760651 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff89g\" (UniqueName: \"kubernetes.io/projected/4b881e81-67ed-4c33-a992-da59d7996b9d-kube-api-access-ff89g\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760674 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxsx\" (UniqueName: \"kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760694 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config\") pod \"route-controller-manager-6576b87f9c-qltqp\" 
(UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760716 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-client\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760736 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbm8s\" (UniqueName: \"kubernetes.io/projected/df659e7d-39ab-41ee-8df5-08896976666c-kube-api-access-tbm8s\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760788 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760812 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760831 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760853 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgn9\" (UniqueName: \"kubernetes.io/projected/ad63def1-18c4-4841-a936-b7c7e42ce092-kube-api-access-ldgn9\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760872 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ggch\" (UniqueName: \"kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760891 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79bb3a49-346f-49b7-bb8e-c358105f8035-config\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760914 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f60efd0-54f5-43eb-b824-f8eaa836df60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" (UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760936 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5198f9e2-ae27-4804-ab74-0759a5217d89-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760955 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-audit-dir\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760975 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.760995 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761026 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-auth-proxy-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761045 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761064 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjkk\" (UniqueName: \"kubernetes.io/projected/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-kube-api-access-ltjkk\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761085 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79bb3a49-346f-49b7-bb8e-c358105f8035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761124 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761150 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761171 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-images\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761208 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-config\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761229 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761249 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761269 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b881e81-67ed-4c33-a992-da59d7996b9d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761292 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2jk\" (UniqueName: \"kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761312 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5198f9e2-ae27-4804-ab74-0759a5217d89-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761333 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g545\" (UID: 
\"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761352 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-config\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761373 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-stats-auth\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761393 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-images\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761411 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-metrics-certs\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761435 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-serving-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761454 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gczv\" (UniqueName: \"kubernetes.io/projected/e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e-kube-api-access-2gczv\") pod \"downloads-7954f5f757-9pq25\" (UID: \"e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e\") " pod="openshift-console/downloads-7954f5f757-9pq25" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761474 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4gld\" (UniqueName: \"kubernetes.io/projected/06e4a751-614f-49d2-8246-c76419d1ccb4-kube-api-access-j4gld\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761494 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3c4f7134-312f-4f1d-a344-80d44d65c371-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.761514 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad63def1-18c4-4841-a936-b7c7e42ce092-serving-cert\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.762880 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.764555 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.764879 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.770904 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.771475 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g545\" (UID: 
\"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.772350 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.774131 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.774228 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-p97g8"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.774294 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-audit-dir\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.774736 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-auth-proxy-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.775143 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-config\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.775232 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.775682 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-config\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.775790 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-images\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.776456 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 
09:47:10.777785 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.778274 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-encryption-config\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.778406 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.778809 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df659e7d-39ab-41ee-8df5-08896976666c-config\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.779052 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5198f9e2-ae27-4804-ab74-0759a5217d89-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.782458 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.782704 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.783719 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5968ec26-dea6-4e79-99b1-5954e173d226-node-pullsecrets\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.784161 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e9b2e26-976d-498c-88d8-dbddd520c9bf-audit-dir\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.785143 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.786164 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-serving-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.786555 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-client\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.786966 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.787361 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.798049 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-serving-cert\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.804358 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-audit\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.805504 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-machine-approver-tls\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.805648 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.805656 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.806016 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-serving-cert\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.806376 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5198f9e2-ae27-4804-ab74-0759a5217d89-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.806578 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.806690 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.807069 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5968ec26-dea6-4e79-99b1-5954e173d226-etcd-client\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.807336 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-service-ca-bundle\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.807454 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2d87932-1993-464d-b3d2-71025526e1f2-trusted-ca\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.808038 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e9b2e26-976d-498c-88d8-dbddd520c9bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.808178 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.808498 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.808895 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert\") pod 
\"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809246 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d75st"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.810663 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5968ec26-dea6-4e79-99b1-5954e173d226-image-import-ca\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809393 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/df659e7d-39ab-41ee-8df5-08896976666c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809458 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809654 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809720 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809998 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595c8db4-733e-4729-aa34-8be7307043a8-config\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.810215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2d87932-1993-464d-b3d2-71025526e1f2-serving-cert\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.810246 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9e9b2e26-976d-498c-88d8-dbddd520c9bf-encryption-config\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.810441 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595c8db4-733e-4729-aa34-8be7307043a8-serving-cert\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.809327 4873 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811682 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811717 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811730 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811744 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811758 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.811819 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.812172 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.814561 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jsc24"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.814589 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmrn5"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.815011 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vklwp"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.817615 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.819023 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p97g8"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.819895 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.820034 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-d75st"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.820884 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mv87q"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.822248 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mv87q"] Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.822346 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.829733 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.850464 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862432 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862471 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79bb3a49-346f-49b7-bb8e-c358105f8035-config\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862498 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgn9\" (UniqueName: 
\"kubernetes.io/projected/ad63def1-18c4-4841-a936-b7c7e42ce092-kube-api-access-ldgn9\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862520 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggch\" (UniqueName: \"kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862542 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f60efd0-54f5-43eb-b824-f8eaa836df60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" (UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862581 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79bb3a49-346f-49b7-bb8e-c358105f8035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862602 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 
crc kubenswrapper[4873]: I0219 09:47:10.862620 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862660 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b881e81-67ed-4c33-a992-da59d7996b9d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862679 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj2jk\" (UniqueName: \"kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862700 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-stats-auth\") pod 
\"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862910 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-images\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862927 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-metrics-certs\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862947 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gczv\" (UniqueName: \"kubernetes.io/projected/e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e-kube-api-access-2gczv\") pod \"downloads-7954f5f757-9pq25\" (UID: \"e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e\") " pod="openshift-console/downloads-7954f5f757-9pq25" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862965 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4gld\" (UniqueName: \"kubernetes.io/projected/06e4a751-614f-49d2-8246-c76419d1ccb4-kube-api-access-j4gld\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.862985 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/3c4f7134-312f-4f1d-a344-80d44d65c371-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863004 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad63def1-18c4-4841-a936-b7c7e42ce092-serving-cert\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863030 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-default-certificate\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863049 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lqpt\" (UniqueName: \"kubernetes.io/projected/3c4f7134-312f-4f1d-a344-80d44d65c371-kube-api-access-5lqpt\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863077 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863097 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrbb\" (UniqueName: \"kubernetes.io/projected/57d54c43-611a-40f1-b05e-9a0007dbe3ec-kube-api-access-hwrbb\") pod \"migrator-59844c95c7-5tt6k\" (UID: \"57d54c43-611a-40f1-b05e-9a0007dbe3ec\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863154 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d639ff25-343e-4e7c-bd2e-f5fc533923f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863176 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863198 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fz4c\" (UniqueName: \"kubernetes.io/projected/d639ff25-343e-4e7c-bd2e-f5fc533923f4-kube-api-access-2fz4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863219 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863240 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad63def1-18c4-4841-a936-b7c7e42ce092-config\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863257 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34f3caca-1b4c-493d-a10b-277b42d7ce72-service-ca-bundle\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863285 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhg7\" (UniqueName: \"kubernetes.io/projected/5288b888-1b48-4590-8d10-f3688ba87a41-kube-api-access-rzhg7\") pod \"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863305 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4f7134-312f-4f1d-a344-80d44d65c371-serving-cert\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863334 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fpwks\" (UniqueName: \"kubernetes.io/projected/12ef881d-885a-4215-bd57-27966cb209b8-kube-api-access-fpwks\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863354 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48911b55-fb42-412b-9298-4cba1105a164-config\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863372 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhkq\" (UniqueName: \"kubernetes.io/projected/34f3caca-1b4c-493d-a10b-277b42d7ce72-kube-api-access-8rhkq\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863394 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79bb3a49-346f-49b7-bb8e-c358105f8035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48911b55-fb42-412b-9298-4cba1105a164-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: 
\"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863445 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5288b888-1b48-4590-8d10-f3688ba87a41-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863465 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863498 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgcd\" (UniqueName: \"kubernetes.io/projected/6f60efd0-54f5-43eb-b824-f8eaa836df60-kube-api-access-rkgcd\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" (UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863538 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863558 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-srv-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863586 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863615 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12ef881d-885a-4215-bd57-27966cb209b8-proxy-tls\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863635 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b881e81-67ed-4c33-a992-da59d7996b9d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863659 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48911b55-fb42-412b-9298-4cba1105a164-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 
09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.863714 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff89g\" (UniqueName: \"kubernetes.io/projected/4b881e81-67ed-4c33-a992-da59d7996b9d-kube-api-access-ff89g\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.864622 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.865336 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79bb3a49-346f-49b7-bb8e-c358105f8035-config\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.868285 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f60efd0-54f5-43eb-b824-f8eaa836df60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" (UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.869376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle\") pod \"console-f9d7485db-shnwj\" 
(UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.870486 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3c4f7134-312f-4f1d-a344-80d44d65c371-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.871139 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.871956 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48911b55-fb42-412b-9298-4cba1105a164-config\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.872028 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c4f7134-312f-4f1d-a344-80d44d65c371-serving-cert\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.872748 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.873461 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.873696 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.875995 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48911b55-fb42-412b-9298-4cba1105a164-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.876492 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79bb3a49-346f-49b7-bb8e-c358105f8035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.876870 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.882653 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.889271 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.895919 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d639ff25-343e-4e7c-bd2e-f5fc533923f4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.909756 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.929872 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.936221 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b881e81-67ed-4c33-a992-da59d7996b9d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 
09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.950483 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.969612 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.972449 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b881e81-67ed-4c33-a992-da59d7996b9d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" Feb 19 09:47:10 crc kubenswrapper[4873]: I0219 09:47:10.989689 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.010380 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.030391 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.050328 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.070686 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.077657 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-srv-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.090893 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.094846 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.102844 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06e4a751-614f-49d2-8246-c76419d1ccb4-profile-collector-cert\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.109983 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.130713 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.137337 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5288b888-1b48-4590-8d10-f3688ba87a41-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.150163 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.170156 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.189956 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.210299 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.219191 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad63def1-18c4-4841-a936-b7c7e42ce092-serving-cert\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.230555 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.240737 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad63def1-18c4-4841-a936-b7c7e42ce092-config\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.251493 4873 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.271396 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.290540 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.296416 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-stats-auth\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.310422 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.315965 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-metrics-certs\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.331347 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.350604 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.359321 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/34f3caca-1b4c-493d-a10b-277b42d7ce72-default-certificate\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.370823 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.390169 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.401618 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34f3caca-1b4c-493d-a10b-277b42d7ce72-service-ca-bundle\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.410758 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.415391 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.430741 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.450677 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" 
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.452282 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12ef881d-885a-4215-bd57-27966cb209b8-images\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.471380 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.490694 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.498735 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12ef881d-885a-4215-bd57-27966cb209b8-proxy-tls\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.522532 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.530819 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.551193 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.570093 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 
09:47:11.590502 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.610838 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.630909 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.651851 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.671805 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.691529 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.710325 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.729073 4873 request.go:700] Waited for 1.015780616s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0 Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.731745 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.750834 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.770255 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.802412 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.810901 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.830794 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.850771 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.871362 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.892734 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.911313 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.931553 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.950911 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.970783 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 19 09:47:11 crc kubenswrapper[4873]: I0219 09:47:11.989931 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.012066 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.031823 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.050475 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.070473 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.104552 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.110725 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.131002 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.150840 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.171728 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.191091 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.211010 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.231834 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.251399 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.271571 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.291688 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.310991 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.331271 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.351562 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.371781 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.391475 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.439988 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxsx\" (UniqueName: \"kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx\") pod \"oauth-openshift-558db77b4-4g545\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.458545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjkk\" (UniqueName: \"kubernetes.io/projected/3300ef2b-adb8-4aea-b8ef-cdec19d504b3-kube-api-access-ltjkk\") pod \"machine-approver-56656f9798-r7gp2\" (UID: \"3300ef2b-adb8-4aea-b8ef-cdec19d504b3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.483407 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbm8s\" (UniqueName: \"kubernetes.io/projected/df659e7d-39ab-41ee-8df5-08896976666c-kube-api-access-tbm8s\") pod \"machine-api-operator-5694c8668f-k627b\" (UID: \"df659e7d-39ab-41ee-8df5-08896976666c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.500875 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4sz\" (UniqueName: \"kubernetes.io/projected/5198f9e2-ae27-4804-ab74-0759a5217d89-kube-api-access-gd4sz\") pod \"openshift-apiserver-operator-796bbdcf4f-jt5wx\" (UID: \"5198f9e2-ae27-4804-ab74-0759a5217d89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.509429 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.518840 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmb7k\" (UniqueName: \"kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k\") pod \"controller-manager-879f6c89f-qvxgz\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.528295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4wz\" (UniqueName: \"kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz\") pod \"route-controller-manager-6576b87f9c-qltqp\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.529426 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.536526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.547160 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9hc\" (UniqueName: \"kubernetes.io/projected/9e9b2e26-976d-498c-88d8-dbddd520c9bf-kube-api-access-5l9hc\") pod \"apiserver-7bbb656c7d-7bgm9\" (UID: \"9e9b2e26-976d-498c-88d8-dbddd520c9bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.558345 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.565913 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkhf\" (UniqueName: \"kubernetes.io/projected/595c8db4-733e-4729-aa34-8be7307043a8-kube-api-access-6jkhf\") pod \"authentication-operator-69f744f599-jsc24\" (UID: \"595c8db4-733e-4729-aa34-8be7307043a8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.587858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfvdj\" (UniqueName: \"kubernetes.io/projected/bb18a52e-b1db-406b-a2e8-88a1ae8b05fc-kube-api-access-nfvdj\") pod \"openshift-controller-manager-operator-756b6f6bc6-2kgbd\" (UID: \"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.606973 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.607012 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csvqh\" (UniqueName: \"kubernetes.io/projected/b2d87932-1993-464d-b3d2-71025526e1f2-kube-api-access-csvqh\") pod \"console-operator-58897d9998-dxcz7\" (UID: \"b2d87932-1993-464d-b3d2-71025526e1f2\") " pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.629544 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.636332 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r54wb\" (UniqueName: \"kubernetes.io/projected/5968ec26-dea6-4e79-99b1-5954e173d226-kube-api-access-r54wb\") pod \"apiserver-76f77b778f-gbzll\" (UID: \"5968ec26-dea6-4e79-99b1-5954e173d226\") " pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.650983 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.670786 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.690381 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.702269 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.710319 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.730478 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.748399 4873 request.go:700] Waited for 1.936356801s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.751130 4873 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.751362 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.766510 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxcz7"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.770675 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.798794 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.811413 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.833238 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.851061 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.889847 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff89g\" (UniqueName: \"kubernetes.io/projected/4b881e81-67ed-4c33-a992-da59d7996b9d-kube-api-access-ff89g\") pod \"kube-storage-version-migrator-operator-b67b599dd-bxfwb\" (UID: \"4b881e81-67ed-4c33-a992-da59d7996b9d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.912767 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgn9\" (UniqueName: \"kubernetes.io/projected/ad63def1-18c4-4841-a936-b7c7e42ce092-kube-api-access-ldgn9\") pod \"service-ca-operator-777779d784-wfq9w\" (UID: \"ad63def1-18c4-4841-a936-b7c7e42ce092\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.924170 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gbzll"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.930386 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ggch\" (UniqueName: \"kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch\") pod \"console-f9d7485db-shnwj\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " pod="openshift-console/console-f9d7485db-shnwj"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.943667 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-shnwj"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.950666 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79bb3a49-346f-49b7-bb8e-c358105f8035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f24fn\" (UID: \"79bb3a49-346f-49b7-bb8e-c358105f8035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.964015 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhg7\" (UniqueName: \"kubernetes.io/projected/5288b888-1b48-4590-8d10-f3688ba87a41-kube-api-access-rzhg7\") pod \"multus-admission-controller-857f4d67dd-lkp4m\" (UID: \"5288b888-1b48-4590-8d10-f3688ba87a41\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.970241 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd"]
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.981398 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.994976 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj2jk\" (UniqueName: \"kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk\") pod \"collect-profiles-29524905-jqdfw\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"
Feb 19 09:47:12 crc kubenswrapper[4873]: I0219 09:47:12.996259 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.002288 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.005882 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k627b"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.007539 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gczv\" (UniqueName: \"kubernetes.io/projected/e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e-kube-api-access-2gczv\") pod \"downloads-7954f5f757-9pq25\" (UID: \"e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e\") " pod="openshift-console/downloads-7954f5f757-9pq25"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.011490 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"
Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.018263 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c1d3a6_23fd_4526_8892_0add23b09a9a.slice/crio-3919526da5da79321b05444b65501cd491975ca30007c3620a85b734545d5c95 WatchSource:0}: Error finding container 3919526da5da79321b05444b65501cd491975ca30007c3620a85b734545d5c95: Status 404 returned error can't find the container with id 3919526da5da79321b05444b65501cd491975ca30007c3620a85b734545d5c95
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.024720 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"
Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.025028 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb18a52e_b1db_406b_a2e8_88a1ae8b05fc.slice/crio-9f71a4f3f66d0666fb1b17346581f9f693cb4bf9056801187b3c9e9a209ec54e WatchSource:0}: Error finding container 9f71a4f3f66d0666fb1b17346581f9f693cb4bf9056801187b3c9e9a209ec54e: Status 404 returned error can't find the container with id 9f71a4f3f66d0666fb1b17346581f9f693cb4bf9056801187b3c9e9a209ec54e
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.026140 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4gld\" (UniqueName: \"kubernetes.io/projected/06e4a751-614f-49d2-8246-c76419d1ccb4-kube-api-access-j4gld\") pod \"catalog-operator-68c6474976-dg6jw\" (UID: \"06e4a751-614f-49d2-8246-c76419d1ccb4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.031859 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.047392 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.050746 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpwks\" (UniqueName: \"kubernetes.io/projected/12ef881d-885a-4215-bd57-27966cb209b8-kube-api-access-fpwks\") pod \"machine-config-operator-74547568cd-pzspc\" (UID: \"12ef881d-885a-4215-bd57-27966cb209b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.055117 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.082931 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhkq\" (UniqueName: \"kubernetes.io/projected/34f3caca-1b4c-493d-a10b-277b42d7ce72-kube-api-access-8rhkq\") pod \"router-default-5444994796-kzpbf\" (UID: \"34f3caca-1b4c-493d-a10b-277b42d7ce72\") " pod="openshift-ingress/router-default-5444994796-kzpbf"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.090890 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgcd\" (UniqueName: \"kubernetes.io/projected/6f60efd0-54f5-43eb-b824-f8eaa836df60-kube-api-access-rkgcd\") pod \"cluster-samples-operator-665b6dd947-9hwg5\" (UID: \"6f60efd0-54f5-43eb-b824-f8eaa836df60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.095410 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.107528 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48911b55-fb42-412b-9298-4cba1105a164-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hs5fr\" (UID: \"48911b55-fb42-412b-9298-4cba1105a164\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.108802 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jsc24"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.109904 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.127281 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrbb\" (UniqueName: \"kubernetes.io/projected/57d54c43-611a-40f1-b05e-9a0007dbe3ec-kube-api-access-hwrbb\") pod \"migrator-59844c95c7-5tt6k\" (UID: \"57d54c43-611a-40f1-b05e-9a0007dbe3ec\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.153461 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lqpt\" (UniqueName: \"kubernetes.io/projected/3c4f7134-312f-4f1d-a344-80d44d65c371-kube-api-access-5lqpt\") pod \"openshift-config-operator-7777fb866f-gk5mg\" (UID: \"3c4f7134-312f-4f1d-a344-80d44d65c371\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.163438 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gbzll"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.173541 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fz4c\" (UniqueName: \"kubernetes.io/projected/d639ff25-343e-4e7c-bd2e-f5fc533923f4-kube-api-access-2fz4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-s67xb\" (UID: \"d639ff25-343e-4e7c-bd2e-f5fc533923f4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.206951 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.207016 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-serving-cert\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.207038 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-srv-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.207089 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/21aad9a0-00de-4f42-9923-6c66c79a3a8d-proxy-tls\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.207430 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.207473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fmb\" (UniqueName: \"kubernetes.io/projected/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-kube-api-access-b4fmb\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g"
Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.207931 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:13.707917592 +0000 UTC m=+142.997349230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.208210 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5968ec26_dea6_4e79_99b1_5954e173d226.slice/crio-8af950b8e44dbf5989eeba42ba1cc61d77c347933b88af6cc094181042dc3724 WatchSource:0}: Error finding container 8af950b8e44dbf5989eeba42ba1cc61d77c347933b88af6cc094181042dc3724: Status 404 returned error can't find the container with id 8af950b8e44dbf5989eeba42ba1cc61d77c347933b88af6cc094181042dc3724
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208338 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5920bdb-afd9-401e-8f11-108a90660e1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208430 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6tw\" (UniqueName: \"kubernetes.io/projected/7671d99c-f025-4e36-b336-106655ec13ef-kube-api-access-5c6tw\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208850 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208883 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208912 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21aad9a0-00de-4f42-9923-6c66c79a3a8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.208944 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-client\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209183 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpfg6\" (UniqueName: \"kubernetes.io/projected/829eb540-5f77-4748-a99d-c5bdbd13c26f-kube-api-access-gpfg6\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209212 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-shnwj"]
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209241 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5e97ddb-b404-4ce2-b760-2739c36c755a-tmpfs\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209262 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209301 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69ad03d-7d61-4b31-a556-325751fcba8e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209437 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69ad03d-7d61-4b31-a556-325751fcba8e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.209770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-key\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.210397 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.210428 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-config\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.210535 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf2de5cd-4280-4c0c-9276-b693a51986b7-cert\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.210597 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bf2ad48-6696-4f08-adc8-330fd4c25028-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.210896 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69ad03d-7d61-4b31-a556-325751fcba8e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211425 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-webhook-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211453 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-service-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211532 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpss\" (UniqueName: \"kubernetes.io/projected/40382b72-88a7-4f37-9192-a555a259d4bd-kube-api-access-slpss\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211821 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-cabundle\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211850 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbm8\" (UniqueName: \"kubernetes.io/projected/21aad9a0-00de-4f42-9923-6c66c79a3a8d-kube-api-access-xdbm8\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.211911 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wm6\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.212576 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.212598 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.212874 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.212895 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66n7\" (UniqueName: \"kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.213166 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.213340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstl4\" (UniqueName: \"kubernetes.io/projected/bf2de5cd-4280-4c0c-9276-b693a51986b7-kube-api-access-sstl4\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.213369 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h4w\" (UniqueName: \"kubernetes.io/projected/4bf2ad48-6696-4f08-adc8-330fd4c25028-kube-api-access-c8h4w\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.213645 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-certs\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.213770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e3dce33-cc6d-41b5-ac17-481a98c06373-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.214002 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.214090 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.214268 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.214348 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5920bdb-afd9-401e-8f11-108a90660e1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.214792 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7zn\" (UniqueName: \"kubernetes.io/projected/c5e97ddb-b404-4ce2-b760-2739c36c755a-kube-api-access-tm7zn\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc 
kubenswrapper[4873]: I0219 09:47:13.215003 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4xtz\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-kube-api-access-p4xtz\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.215035 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-node-bootstrap-token\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.215192 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trrg\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-kube-api-access-8trrg\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.215237 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478rd\" (UniqueName: \"kubernetes.io/projected/5b283da7-d736-4ac2-a290-e142728e838a-kube-api-access-478rd\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.215302 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7671d99c-f025-4e36-b336-106655ec13ef-metrics-tls\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.216310 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e3dce33-cc6d-41b5-ac17-481a98c06373-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.237527 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.242089 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" event={"ID":"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc","Type":"ContainerStarted","Data":"9f71a4f3f66d0666fb1b17346581f9f693cb4bf9056801187b3c9e9a209ec54e"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.248154 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.249168 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" event={"ID":"9324aa8b-fbce-42bb-b339-0aa2e382efd4","Type":"ContainerStarted","Data":"b4cafb3addf61abe3b1441fa50a8321f11c79cf993ea43c1a09c9c8ca90fbdfc"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.253426 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" event={"ID":"df659e7d-39ab-41ee-8df5-08896976666c","Type":"ContainerStarted","Data":"996479731ccd4ae0ac95d94c3bc6866a40d655d2e667a16ece349b1dbabe65e8"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.253516 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.256472 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxcz7"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.260290 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.263585 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" event={"ID":"3300ef2b-adb8-4aea-b8ef-cdec19d504b3","Type":"ContainerStarted","Data":"a238776f2ad55cef9e1e6259d58b9ebc7c330639905481a3404c60dd5028fe51"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.263620 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" event={"ID":"3300ef2b-adb8-4aea-b8ef-cdec19d504b3","Type":"ContainerStarted","Data":"ce636371d8d8e9e398cd742b48ffdca9fd56e2e90cf89e766fb4420d1d841433"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.263630 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" event={"ID":"3300ef2b-adb8-4aea-b8ef-cdec19d504b3","Type":"ContainerStarted","Data":"751e435e40fd0f90bb69689c922fbfeccee4e0004e8903532e7dc95dab50d8db"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.275367 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" event={"ID":"c8c1d3a6-23fd-4526-8892-0add23b09a9a","Type":"ContainerStarted","Data":"3919526da5da79321b05444b65501cd491975ca30007c3620a85b734545d5c95"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.282183 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" event={"ID":"5968ec26-dea6-4e79-99b1-5954e173d226","Type":"ContainerStarted","Data":"8af950b8e44dbf5989eeba42ba1cc61d77c347933b88af6cc094181042dc3724"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.283502 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" event={"ID":"9e9b2e26-976d-498c-88d8-dbddd520c9bf","Type":"ContainerStarted","Data":"a151c797426fa44538a258538af785a475d0df407621ec0f555080c653e8112f"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.288122 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" event={"ID":"595c8db4-733e-4729-aa34-8be7307043a8","Type":"ContainerStarted","Data":"d9f1af9debc32423a5b203643faad3d9624bbdf1a35fb0469cf8556ee6523f59"} Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.292560 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2d87932_1993_464d_b3d2_71025526e1f2.slice/crio-52c8c7565f7e76e2b1be9ed7ec38817313b26529cc4216d67909c7d5b38aadf8 WatchSource:0}: Error finding container 52c8c7565f7e76e2b1be9ed7ec38817313b26529cc4216d67909c7d5b38aadf8: Status 404 returned error can't find the container with id 52c8c7565f7e76e2b1be9ed7ec38817313b26529cc4216d67909c7d5b38aadf8 Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.293051 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9pq25" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.298289 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.304734 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318375 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318546 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318697 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf2de5cd-4280-4c0c-9276-b693a51986b7-cert\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bf2ad48-6696-4f08-adc8-330fd4c25028-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc 
kubenswrapper[4873]: I0219 09:47:13.318753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69ad03d-7d61-4b31-a556-325751fcba8e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318801 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-plugins-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318825 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318843 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-webhook-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318862 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-service-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: 
\"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slpss\" (UniqueName: \"kubernetes.io/projected/40382b72-88a7-4f37-9192-a555a259d4bd-kube-api-access-slpss\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318903 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-cabundle\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318926 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbm8\" (UniqueName: \"kubernetes.io/projected/21aad9a0-00de-4f42-9923-6c66c79a3a8d-kube-api-access-xdbm8\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318947 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7wm6\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318966 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.318984 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319002 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-mountpoint-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319018 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319033 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176cb3ad-1201-420f-bdb2-586f974aeaf2-config-volume\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319049 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t66n7\" (UniqueName: \"kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319064 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319080 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sstl4\" (UniqueName: \"kubernetes.io/projected/bf2de5cd-4280-4c0c-9276-b693a51986b7-kube-api-access-sstl4\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319095 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8h4w\" (UniqueName: \"kubernetes.io/projected/4bf2ad48-6696-4f08-adc8-330fd4c25028-kube-api-access-c8h4w\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319154 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-certs\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" 
Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319175 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e3dce33-cc6d-41b5-ac17-481a98c06373-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319197 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319217 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319239 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319262 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5920bdb-afd9-401e-8f11-108a90660e1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319284 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7zn\" (UniqueName: \"kubernetes.io/projected/c5e97ddb-b404-4ce2-b760-2739c36c755a-kube-api-access-tm7zn\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319307 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4xtz\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-kube-api-access-p4xtz\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319325 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-node-bootstrap-token\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319348 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trrg\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-kube-api-access-8trrg\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319371 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478rd\" (UniqueName: 
\"kubernetes.io/projected/5b283da7-d736-4ac2-a290-e142728e838a-kube-api-access-478rd\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319393 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7671d99c-f025-4e36-b336-106655ec13ef-metrics-tls\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-registration-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.319452 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-csi-data-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320177 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e3dce33-cc6d-41b5-ac17-481a98c06373-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320235 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320728 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-serving-cert\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320761 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-srv-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320812 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/21aad9a0-00de-4f42-9923-6c66c79a3a8d-proxy-tls\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.320930 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/176cb3ad-1201-420f-bdb2-586f974aeaf2-metrics-tls\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc 
kubenswrapper[4873]: I0219 09:47:13.320993 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fmb\" (UniqueName: \"kubernetes.io/projected/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-kube-api-access-b4fmb\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321061 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5920bdb-afd9-401e-8f11-108a90660e1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321090 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t257b\" (UniqueName: \"kubernetes.io/projected/176cb3ad-1201-420f-bdb2-586f974aeaf2-kube-api-access-t257b\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321164 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6tw\" (UniqueName: \"kubernetes.io/projected/7671d99c-f025-4e36-b336-106655ec13ef-kube-api-access-5c6tw\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321226 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321254 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321306 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21aad9a0-00de-4f42-9923-6c66c79a3a8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321333 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-client\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjv2\" (UniqueName: \"kubernetes.io/projected/2877ec4c-7a3e-4105-ac87-6d096df10661-kube-api-access-qfjv2\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321664 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpfg6\" (UniqueName: 
\"kubernetes.io/projected/829eb540-5f77-4748-a99d-c5bdbd13c26f-kube-api-access-gpfg6\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.321683 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:13.821659654 +0000 UTC m=+143.111091292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321717 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5e97ddb-b404-4ce2-b760-2739c36c755a-tmpfs\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321746 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321776 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69ad03d-7d61-4b31-a556-325751fcba8e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321801 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-socket-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321828 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69ad03d-7d61-4b31-a556-325751fcba8e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321851 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-key\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321873 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.321888 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-config\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.322626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-config\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.323707 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.323984 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c5e97ddb-b404-4ce2-b760-2739c36c755a-tmpfs\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.324153 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.325854 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.325977 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e3dce33-cc6d-41b5-ac17-481a98c06373-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.327464 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.327787 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5920bdb-afd9-401e-8f11-108a90660e1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.329162 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f69ad03d-7d61-4b31-a556-325751fcba8e-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.331836 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-service-ca\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.333616 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf2de5cd-4280-4c0c-9276-b693a51986b7-cert\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.333802 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.333808 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-serving-cert\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.335612 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.336095 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7671d99c-f025-4e36-b336-106655ec13ef-metrics-tls\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.336533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-srv-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.337232 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-cabundle\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.337510 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-webhook-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.337546 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.338666 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.339241 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c5e97ddb-b404-4ce2-b760-2739c36c755a-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.339514 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5b283da7-d736-4ac2-a290-e142728e838a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.340523 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-certs\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.340563 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e3dce33-cc6d-41b5-ac17-481a98c06373-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.340602 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40382b72-88a7-4f37-9192-a555a259d4bd-etcd-client\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.341300 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" event={"ID":"5198f9e2-ae27-4804-ab74-0759a5217d89","Type":"ContainerStarted","Data":"5c3be8f3d5492a9f7a86dbd9963bf3d401162083d542b8b1814d89232233bc6a"} Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.341605 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21aad9a0-00de-4f42-9923-6c66c79a3a8d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.342358 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/21aad9a0-00de-4f42-9923-6c66c79a3a8d-proxy-tls\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.342470 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4bf2ad48-6696-4f08-adc8-330fd4c25028-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.342622 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5920bdb-afd9-401e-8f11-108a90660e1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.342871 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69ad03d-7d61-4b31-a556-325751fcba8e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.345215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.348913 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/829eb540-5f77-4748-a99d-c5bdbd13c26f-signing-key\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.356887 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-node-bootstrap-token\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.357073 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5288b888_1b48_4590_8d10_f3688ba87a41.slice/crio-f764265afa3ab2c3c644630fe40a06105e4fbe716760c85c00682cc829d0c1c4 WatchSource:0}: Error finding container f764265afa3ab2c3c644630fe40a06105e4fbe716760c85c00682cc829d0c1c4: Status 404 returned error can't find the container with id f764265afa3ab2c3c644630fe40a06105e4fbe716760c85c00682cc829d0c1c4 Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.357093 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lkp4m"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.357740 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpfg6\" (UniqueName: \"kubernetes.io/projected/829eb540-5f77-4748-a99d-c5bdbd13c26f-kube-api-access-gpfg6\") pod \"service-ca-9c57cc56f-tjxkj\" (UID: \"829eb540-5f77-4748-a99d-c5bdbd13c26f\") " pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.373316 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstl4\" (UniqueName: \"kubernetes.io/projected/bf2de5cd-4280-4c0c-9276-b693a51986b7-kube-api-access-sstl4\") pod \"ingress-canary-p97g8\" (UID: \"bf2de5cd-4280-4c0c-9276-b693a51986b7\") " pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.386096 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.389274 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.405155 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.412544 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.414273 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.422873 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t257b\" (UniqueName: \"kubernetes.io/projected/176cb3ad-1201-420f-bdb2-586f974aeaf2-kube-api-access-t257b\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.422917 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfjv2\" (UniqueName: \"kubernetes.io/projected/2877ec4c-7a3e-4105-ac87-6d096df10661-kube-api-access-qfjv2\") pod \"csi-hostpathplugin-d75st\" 
(UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.422943 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-socket-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.422983 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-plugins-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423021 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-mountpoint-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423035 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176cb3ad-1201-420f-bdb2-586f974aeaf2-config-volume\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423139 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-registration-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " 
pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423180 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-csi-data-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423212 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/176cb3ad-1201-420f-bdb2-586f974aeaf2-metrics-tls\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.423234 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.423536 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:13.923524693 +0000 UTC m=+143.212956331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.424074 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-socket-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.424133 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-plugins-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.424165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-mountpoint-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.424706 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-csi-data-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc 
kubenswrapper[4873]: I0219 09:47:13.424776 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2877ec4c-7a3e-4105-ac87-6d096df10661-registration-dir\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.425571 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176cb3ad-1201-420f-bdb2-586f974aeaf2-config-volume\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.428087 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/176cb3ad-1201-420f-bdb2-586f974aeaf2-metrics-tls\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.429167 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66n7\" (UniqueName: \"kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7\") pod \"marketplace-operator-79b997595-86hhq\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.454322 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7zn\" (UniqueName: \"kubernetes.io/projected/c5e97ddb-b404-4ce2-b760-2739c36c755a-kube-api-access-tm7zn\") pod \"packageserver-d55dfcdfc-rpsnj\" (UID: \"c5e97ddb-b404-4ce2-b760-2739c36c755a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.464562 4873 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p97g8" Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.471947 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ef881d_885a_4215_bd57_27966cb209b8.slice/crio-5fd5fa13184676d249c8368aa4595a57293f54ca5e72b3279b3a50e064929529 WatchSource:0}: Error finding container 5fd5fa13184676d249c8368aa4595a57293f54ca5e72b3279b3a50e064929529: Status 404 returned error can't find the container with id 5fd5fa13184676d249c8368aa4595a57293f54ca5e72b3279b3a50e064929529 Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.474215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6tw\" (UniqueName: \"kubernetes.io/projected/7671d99c-f025-4e36-b336-106655ec13ef-kube-api-access-5c6tw\") pod \"dns-operator-744455d44c-qmrn5\" (UID: \"7671d99c-f025-4e36-b336-106655ec13ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.499305 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4xtz\" (UniqueName: \"kubernetes.io/projected/a5920bdb-afd9-401e-8f11-108a90660e1c-kube-api-access-p4xtz\") pod \"ingress-operator-5b745b69d9-mwl9k\" (UID: \"a5920bdb-afd9-401e-8f11-108a90660e1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.507923 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fmb\" (UniqueName: \"kubernetes.io/projected/7a3637cc-cfef-446c-b0fb-f37f3396e0d7-kube-api-access-b4fmb\") pod \"machine-config-server-2798g\" (UID: \"7a3637cc-cfef-446c-b0fb-f37f3396e0d7\") " pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.514758 4873 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.514785 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.523684 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.524130 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.024090776 +0000 UTC m=+143.313522404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.525299 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69ad03d-7d61-4b31-a556-325751fcba8e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kx6gv\" (UID: \"f69ad03d-7d61-4b31-a556-325751fcba8e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.546255 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slpss\" (UniqueName: \"kubernetes.io/projected/40382b72-88a7-4f37-9192-a555a259d4bd-kube-api-access-slpss\") pod \"etcd-operator-b45778765-vklwp\" (UID: \"40382b72-88a7-4f37-9192-a555a259d4bd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: W0219 09:47:13.553079 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34f3caca_1b4c_493d_a10b_277b42d7ce72.slice/crio-ae45da8c938f4f64005c9bf6e462b1759bbbeee8b1b60b03c29e259f6c2bb43b WatchSource:0}: Error finding container ae45da8c938f4f64005c9bf6e462b1759bbbeee8b1b60b03c29e259f6c2bb43b: Status 404 returned error can't find the container with id ae45da8c938f4f64005c9bf6e462b1759bbbeee8b1b60b03c29e259f6c2bb43b Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.566301 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.586707 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.613998 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-478rd\" (UniqueName: \"kubernetes.io/projected/5b283da7-d736-4ac2-a290-e142728e838a-kube-api-access-478rd\") pod \"olm-operator-6b444d44fb-2b5f5\" (UID: \"5b283da7-d736-4ac2-a290-e142728e838a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.624977 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.625308 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trrg\" (UniqueName: \"kubernetes.io/projected/2e3dce33-cc6d-41b5-ac17-481a98c06373-kube-api-access-8trrg\") pod \"cluster-image-registry-operator-dc59b4c8b-8cxf7\" (UID: \"2e3dce33-cc6d-41b5-ac17-481a98c06373\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.626555 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.126538912 +0000 UTC m=+143.415970560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.663229 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8h4w\" (UniqueName: \"kubernetes.io/projected/4bf2ad48-6696-4f08-adc8-330fd4c25028-kube-api-access-c8h4w\") pod \"package-server-manager-789f6589d5-jjkrt\" (UID: \"4bf2ad48-6696-4f08-adc8-330fd4c25028\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.663572 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.664136 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.669304 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.679599 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.680229 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7wm6\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.692838 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.695207 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbm8\" (UniqueName: \"kubernetes.io/projected/21aad9a0-00de-4f42-9923-6c66c79a3a8d-kube-api-access-xdbm8\") pod \"machine-config-controller-84d6567774-24gcv\" (UID: \"21aad9a0-00de-4f42-9923-6c66c79a3a8d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.696230 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"] Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.698044 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.710886 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t257b\" (UniqueName: \"kubernetes.io/projected/176cb3ad-1201-420f-bdb2-586f974aeaf2-kube-api-access-t257b\") pod \"dns-default-mv87q\" (UID: \"176cb3ad-1201-420f-bdb2-586f974aeaf2\") " pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.711672 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.717516 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.727369 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.727817 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.727981 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.227960339 +0000 UTC m=+143.517392037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.743225 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfjv2\" (UniqueName: \"kubernetes.io/projected/2877ec4c-7a3e-4105-ac87-6d096df10661-kube-api-access-qfjv2\") pod \"csi-hostpathplugin-d75st\" (UID: \"2877ec4c-7a3e-4105-ac87-6d096df10661\") " pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.749481 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2798g" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.753353 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.803568 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.804376 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d75st" Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.848192 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.848617 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.348601669 +0000 UTC m=+143.638033317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.950431 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.950694 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.450668015 +0000 UTC m=+143.740099663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:13 crc kubenswrapper[4873]: I0219 09:47:13.950846 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:13 crc kubenswrapper[4873]: E0219 09:47:13.952071 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.452059185 +0000 UTC m=+143.741490833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:13.996539 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.002364 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.010902 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.042697 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7gp2" podStartSLOduration=117.042677051 podStartE2EDuration="1m57.042677051s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:14.022363327 +0000 UTC m=+143.311794965" watchObservedRunningTime="2026-02-19 09:47:14.042677051 +0000 UTC m=+143.332108709" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.044914 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.052536 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.052952 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.552924876 +0000 UTC m=+143.842356514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.086932 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9pq25"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.087895 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.153734 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 
09:47:14.154073 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.654062435 +0000 UTC m=+143.943494073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.186829 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tjxkj"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.211041 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p97g8"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.255436 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.255655 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.755614766 +0000 UTC m=+144.045046434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.255798 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.256631 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.756602784 +0000 UTC m=+144.046034422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.264612 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.325705 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7"] Feb 19 09:47:14 crc kubenswrapper[4873]: W0219 09:47:14.328228 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57d54c43_611a_40f1_b05e_9a0007dbe3ec.slice/crio-1ef6994b71107d589c7222906c9783e6710458ddf932d1bdcaf1c6580c2ab010 WatchSource:0}: Error finding container 1ef6994b71107d589c7222906c9783e6710458ddf932d1bdcaf1c6580c2ab010: Status 404 returned error can't find the container with id 1ef6994b71107d589c7222906c9783e6710458ddf932d1bdcaf1c6580c2ab010 Feb 19 09:47:14 crc kubenswrapper[4873]: W0219 09:47:14.347695 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f6a9a1_70e5_46ce_97aa_3dc9d26c672e.slice/crio-0daa5ed7cee4ce149fc32b28c1540ff717542388480b099848cac564d411d3c5 WatchSource:0}: Error finding container 0daa5ed7cee4ce149fc32b28c1540ff717542388480b099848cac564d411d3c5: Status 404 returned error can't find the container with id 0daa5ed7cee4ce149fc32b28c1540ff717542388480b099848cac564d411d3c5 Feb 19 09:47:14 crc kubenswrapper[4873]: W0219 
09:47:14.350966 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e4a751_614f_49d2_8246_c76419d1ccb4.slice/crio-7f930d77d6c1b794c971625c5385cd1fdd036f11c2a23b5d63fabb2ad61b5233 WatchSource:0}: Error finding container 7f930d77d6c1b794c971625c5385cd1fdd036f11c2a23b5d63fabb2ad61b5233: Status 404 returned error can't find the container with id 7f930d77d6c1b794c971625c5385cd1fdd036f11c2a23b5d63fabb2ad61b5233 Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.357710 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.358639 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.858622708 +0000 UTC m=+144.148054346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.357744 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" event={"ID":"5198f9e2-ae27-4804-ab74-0759a5217d89","Type":"ContainerStarted","Data":"cd05e369e7782989017ff1d32b735cd57d5b43295d78b3f652974d003ba29fb5"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.370608 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" event={"ID":"df659e7d-39ab-41ee-8df5-08896976666c","Type":"ContainerStarted","Data":"97010ed8078531daee035bd37dff34f975c08ac9590e5e55ac162c150e964363"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.370653 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" event={"ID":"df659e7d-39ab-41ee-8df5-08896976666c","Type":"ContainerStarted","Data":"99a5de09374b447b2a6cec4a863f8114702137a0313f0efe6a40ce78a9023ce6"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.385808 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" event={"ID":"3c4f7134-312f-4f1d-a344-80d44d65c371","Type":"ContainerStarted","Data":"0bc85a48940755ef83d2dac6f27402777234d5037699b4a2566555cf4a968d32"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.387595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" event={"ID":"57d54c43-611a-40f1-b05e-9a0007dbe3ec","Type":"ContainerStarted","Data":"1ef6994b71107d589c7222906c9783e6710458ddf932d1bdcaf1c6580c2ab010"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.395562 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" event={"ID":"79bb3a49-346f-49b7-bb8e-c358105f8035","Type":"ContainerStarted","Data":"c802d62e0598cf4480ef9c8f5a2bddefb86ea0c808ed027dd26a5ef64907d8b5"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.398170 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.400157 4873 csr.go:261] certificate signing request csr-ckc9p is approved, waiting to be issued Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.403806 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" event={"ID":"9e9b2e26-976d-498c-88d8-dbddd520c9bf","Type":"ContainerStarted","Data":"124dc5c9ec08792dda6c444f4fb30401ef2bc5d4dac9d8ec3bd082febd153f0a"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.406415 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" event={"ID":"bb18a52e-b1db-406b-a2e8-88a1ae8b05fc","Type":"ContainerStarted","Data":"73d8bf9e9107a32db28a8a87ed6db234be57692c177e07c6e50d5e5fc93dd98b"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.409343 4873 csr.go:257] certificate signing request csr-ckc9p is issued Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.412551 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" 
event={"ID":"12ef881d-885a-4215-bd57-27966cb209b8","Type":"ContainerStarted","Data":"5fd5fa13184676d249c8368aa4595a57293f54ca5e72b3279b3a50e064929529"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.416359 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" event={"ID":"6f60efd0-54f5-43eb-b824-f8eaa836df60","Type":"ContainerStarted","Data":"1e01e7bdb7bbbc36454cea754639a33a0dfeec0a18d5a42fedc4ef2e29d29e2a"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.422254 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" event={"ID":"5288b888-1b48-4590-8d10-f3688ba87a41","Type":"ContainerStarted","Data":"f764265afa3ab2c3c644630fe40a06105e4fbe716760c85c00682cc829d0c1c4"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.429466 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-shnwj" event={"ID":"10aa25f4-7549-468a-b42f-19305ad066dd","Type":"ContainerStarted","Data":"7a581424f0da8ea44b76eb3be0d323e922f9fdfbe4bef5b6c66bc43929d92666"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.435992 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" event={"ID":"ad63def1-18c4-4841-a936-b7c7e42ce092","Type":"ContainerStarted","Data":"22633f6be15e3066ccd2857d6ceca2e0f25fb8362c2866a0b4c282b81d4d81a2"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.442840 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.444078 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" 
event={"ID":"de77b9aa-b558-4431-b116-5e1e1cc116f3","Type":"ContainerStarted","Data":"a018522e013b75a19d6f1ebe089ac24d73537d03912cdb27eb2e286e6cfe33f1"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.447178 4873 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4g545 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.447236 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.447616 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" event={"ID":"4b881e81-67ed-4c33-a992-da59d7996b9d","Type":"ContainerStarted","Data":"36bec1813ef9454858204464e136a511460f50dc9a16eee6fd55fe827e0740ca"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.459367 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.473303 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" 
event={"ID":"595c8db4-733e-4729-aa34-8be7307043a8","Type":"ContainerStarted","Data":"6baff13dd3bc0e3cfc61c01c263ffef22ede179c3b6f7ae6ac471e48b8576db9"} Feb 19 09:47:14 crc kubenswrapper[4873]: W0219 09:47:14.475377 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e3dce33_cc6d_41b5_ac17_481a98c06373.slice/crio-efe28b822db8d99d38099b70f217108261222e01289341596d0f1c9c87197060 WatchSource:0}: Error finding container efe28b822db8d99d38099b70f217108261222e01289341596d0f1c9c87197060: Status 404 returned error can't find the container with id efe28b822db8d99d38099b70f217108261222e01289341596d0f1c9c87197060 Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.484542 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:14.984518779 +0000 UTC m=+144.273950417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.526706 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" event={"ID":"d639ff25-343e-4e7c-bd2e-f5fc533923f4","Type":"ContainerStarted","Data":"c8a7034bee2eb267f1a3c1b5d1b92b9d227d88fb49542da1a2989e40d3218146"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.547379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" event={"ID":"b2d87932-1993-464d-b3d2-71025526e1f2","Type":"ContainerStarted","Data":"52c8c7565f7e76e2b1be9ed7ec38817313b26529cc4216d67909c7d5b38aadf8"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.548194 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.555775 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" event={"ID":"c8c1d3a6-23fd-4526-8892-0add23b09a9a","Type":"ContainerStarted","Data":"ca0ba083f2d897c6b2f519cbc9b73b7e76a6575165553e074c67d17692757e96"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.556643 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.560009 4873 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-qvxgz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.560058 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.560728 4873 patch_prober.go:28] interesting pod/console-operator-58897d9998-dxcz7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.560770 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" podUID="b2d87932-1993-464d-b3d2-71025526e1f2" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.562318 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.563477 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.06345896 +0000 UTC m=+144.352890598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.574129 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kzpbf" event={"ID":"34f3caca-1b4c-493d-a10b-277b42d7ce72","Type":"ContainerStarted","Data":"ae45da8c938f4f64005c9bf6e462b1759bbbeee8b1b60b03c29e259f6c2bb43b"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.576864 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" event={"ID":"bd468f98-7720-4f9a-972f-684b96f4f90f","Type":"ContainerStarted","Data":"00d137182546ceb731d1231ff4489ff44e56001f5469f15e0d3bd78dd28af61d"} Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.577213 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.585343 4873 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qltqp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" 
start-of-body= Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.585398 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.648703 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k"] Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.663848 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.665631 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.165612298 +0000 UTC m=+144.455043936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.766768 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.766961 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.266935952 +0000 UTC m=+144.556367590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.767023 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.767405 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.267392845 +0000 UTC m=+144.556824483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.867994 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.868365 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.368341369 +0000 UTC m=+144.657773007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.868464 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.868811 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.368798242 +0000 UTC m=+144.658229880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.969183 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.969332 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.469304941 +0000 UTC m=+144.758736569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:14 crc kubenswrapper[4873]: I0219 09:47:14.969477 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:14 crc kubenswrapper[4873]: E0219 09:47:14.969766 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.469753034 +0000 UTC m=+144.759184672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.072748 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.072962 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.572924672 +0000 UTC m=+144.862356310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.076467 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.076980 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.576965858 +0000 UTC m=+144.866397496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.127011 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.153164 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mv87q"] Feb 19 09:47:15 crc kubenswrapper[4873]: W0219 09:47:15.182636 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf905b5ea_71df_4b1c_997c_d68766bcfcfe.slice/crio-019acffae30ee36980fd8260d8a8299738a95c80eb08007a6c0478560261a038 WatchSource:0}: Error finding container 019acffae30ee36980fd8260d8a8299738a95c80eb08007a6c0478560261a038: Status 404 returned error can't find the container with id 019acffae30ee36980fd8260d8a8299738a95c80eb08007a6c0478560261a038 Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.183805 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.184260 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.684245564 +0000 UTC m=+144.973677202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.185346 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vklwp"] Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.272536 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmrn5"] Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.277488 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d75st"] Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.290141 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.290447 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.790432458 +0000 UTC m=+145.079864086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:15 crc kubenswrapper[4873]: W0219 09:47:15.290755 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40382b72_88a7_4f37_9192_a555a259d4bd.slice/crio-6345f8656fe0ece4ffd3ff39617df97541f890972726246e1b10a7ec73f814ea WatchSource:0}: Error finding container 6345f8656fe0ece4ffd3ff39617df97541f890972726246e1b10a7ec73f814ea: Status 404 returned error can't find the container with id 6345f8656fe0ece4ffd3ff39617df97541f890972726246e1b10a7ec73f814ea Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.292766 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt"] Feb 19 09:47:15 crc kubenswrapper[4873]: W0219 09:47:15.301601 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2877ec4c_7a3e_4105_ac87_6d096df10661.slice/crio-859ef3d4364f37b532f02704ca10a3e89ecac979d886e1164e57fb08f4febc07 WatchSource:0}: Error finding container 859ef3d4364f37b532f02704ca10a3e89ecac979d886e1164e57fb08f4febc07: Status 404 returned error can't find the container with id 859ef3d4364f37b532f02704ca10a3e89ecac979d886e1164e57fb08f4febc07 Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.346257 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2kgbd" 
podStartSLOduration=118.346241353 podStartE2EDuration="1m58.346241353s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.34267827 +0000 UTC m=+144.632109908" watchObservedRunningTime="2026-02-19 09:47:15.346241353 +0000 UTC m=+144.635672991" Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.366944 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt5wx" podStartSLOduration=118.366927548 podStartE2EDuration="1m58.366927548s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.362369887 +0000 UTC m=+144.651801545" watchObservedRunningTime="2026-02-19 09:47:15.366927548 +0000 UTC m=+144.656359186" Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.393653 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.394002 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.893984366 +0000 UTC m=+145.183416004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.411966 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 09:42:14 +0000 UTC, rotation deadline is 2026-12-07 09:21:22.756176769 +0000 UTC
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.412031 4873 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6983h34m7.344149114s for next certificate rotation
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.452739 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5"]
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.453745 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jsc24" podStartSLOduration=118.453736525 podStartE2EDuration="1m58.453736525s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.44626747 +0000 UTC m=+144.735699108" watchObservedRunningTime="2026-02-19 09:47:15.453736525 +0000 UTC m=+144.743168163"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.463645 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv"]
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.482934 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" podStartSLOduration=118.482918014 podStartE2EDuration="1m58.482918014s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.482174542 +0000 UTC m=+144.771606190" watchObservedRunningTime="2026-02-19 09:47:15.482918014 +0000 UTC m=+144.772349652"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.495445 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.495860 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:15.995838045 +0000 UTC m=+145.285269673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.529160 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-shnwj" podStartSLOduration=118.529144883 podStartE2EDuration="1m58.529144883s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.52763081 +0000 UTC m=+144.817062458" watchObservedRunningTime="2026-02-19 09:47:15.529144883 +0000 UTC m=+144.818576521"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.563788 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" podStartSLOduration=118.563752319 podStartE2EDuration="1m58.563752319s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.561400991 +0000 UTC m=+144.850832629" watchObservedRunningTime="2026-02-19 09:47:15.563752319 +0000 UTC m=+144.853183967"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.597450 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.597831 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.097816379 +0000 UTC m=+145.387248017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.691272 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" podStartSLOduration=118.691254146 podStartE2EDuration="1m58.691254146s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.689170616 +0000 UTC m=+144.978602254" watchObservedRunningTime="2026-02-19 09:47:15.691254146 +0000 UTC m=+144.980685784"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.699389 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.699851 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.199835383 +0000 UTC m=+145.489267021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.719367 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" event={"ID":"12ef881d-885a-4215-bd57-27966cb209b8","Type":"ContainerStarted","Data":"458a152c636f20af08d4a1f9dd12949e038e645a9c738e6ffd5c5d62d089b63d"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.719408 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" event={"ID":"6f60efd0-54f5-43eb-b824-f8eaa836df60","Type":"ContainerStarted","Data":"4bb2a7cc8777b708c0b821e8f268c52655ec82f1a20a7fe49cae64f31ea580f5"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.719425 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2798g" event={"ID":"7a3637cc-cfef-446c-b0fb-f37f3396e0d7","Type":"ContainerStarted","Data":"1e0279da9306def009f2ea0d536091c5b2bdb0ca796b642f04f1a0af27a1ade2"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.722614 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-shnwj" event={"ID":"10aa25f4-7549-468a-b42f-19305ad066dd","Type":"ContainerStarted","Data":"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.739335 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" podStartSLOduration=118.739318498 podStartE2EDuration="1m58.739318498s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.734342645 +0000 UTC m=+145.023774283" watchObservedRunningTime="2026-02-19 09:47:15.739318498 +0000 UTC m=+145.028750136"
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.739753 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" event={"ID":"a5920bdb-afd9-401e-8f11-108a90660e1c","Type":"ContainerStarted","Data":"806f061284ee7c34be2928a152897dad977e6754cdfd324c860c76b9f536ce51"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.800059 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.803752 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.303731851 +0000 UTC m=+145.593163479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.820564 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" event={"ID":"40382b72-88a7-4f37-9192-a555a259d4bd","Type":"ContainerStarted","Data":"6345f8656fe0ece4ffd3ff39617df97541f890972726246e1b10a7ec73f814ea"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.897935 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" event={"ID":"2e3dce33-cc6d-41b5-ac17-481a98c06373","Type":"ContainerStarted","Data":"efe28b822db8d99d38099b70f217108261222e01289341596d0f1c9c87197060"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.903841 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:15 crc kubenswrapper[4873]: E0219 09:47:15.904158 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.404144709 +0000 UTC m=+145.693576347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.937656 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" event={"ID":"57d54c43-611a-40f1-b05e-9a0007dbe3ec","Type":"ContainerStarted","Data":"376202c24805e4276aabe8a4ebb3fff982636b00eaa4f2e0501988b21a7d953f"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.959869 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" event={"ID":"ad63def1-18c4-4841-a936-b7c7e42ce092","Type":"ContainerStarted","Data":"65823cb2edab02571e8dd813d4387207d4d48977582b49633e0fcb68fdcf2590"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.966264 4873 generic.go:334] "Generic (PLEG): container finished" podID="9e9b2e26-976d-498c-88d8-dbddd520c9bf" containerID="124dc5c9ec08792dda6c444f4fb30401ef2bc5d4dac9d8ec3bd082febd153f0a" exitCode=0
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.966325 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" event={"ID":"9e9b2e26-976d-498c-88d8-dbddd520c9bf","Type":"ContainerDied","Data":"124dc5c9ec08792dda6c444f4fb30401ef2bc5d4dac9d8ec3bd082febd153f0a"}
Feb 19 09:47:15 crc kubenswrapper[4873]: I0219 09:47:15.966349 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" event={"ID":"9e9b2e26-976d-498c-88d8-dbddd520c9bf","Type":"ContainerStarted","Data":"516187f883da9b7d629e6b7b4719d3c614170b0d549730a990ec738141a2a94b"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.002872 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wfq9w" podStartSLOduration=119.002853178 podStartE2EDuration="1m59.002853178s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:15.999006857 +0000 UTC m=+145.288438495" watchObservedRunningTime="2026-02-19 09:47:16.002853178 +0000 UTC m=+145.292284816"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.004321 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.005466 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.505449353 +0000 UTC m=+145.794880991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.019541 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" event={"ID":"9324aa8b-fbce-42bb-b339-0aa2e382efd4","Type":"ContainerStarted","Data":"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.032753 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d75st" event={"ID":"2877ec4c-7a3e-4105-ac87-6d096df10661","Type":"ContainerStarted","Data":"859ef3d4364f37b532f02704ca10a3e89ecac979d886e1164e57fb08f4febc07"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.041527 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" event={"ID":"48911b55-fb42-412b-9298-4cba1105a164","Type":"ContainerStarted","Data":"002c07ff8b0050c7c6c05249b1dd992e03979ec683583e4b27531e24bb5b2562"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.054145 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" event={"ID":"de77b9aa-b558-4431-b116-5e1e1cc116f3","Type":"ContainerStarted","Data":"e60bc2f916aff75454f8db4d5b15c6ae005baebfdcb79c0c87df06d3a9db5142"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.076619 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" event={"ID":"d639ff25-343e-4e7c-bd2e-f5fc533923f4","Type":"ContainerStarted","Data":"75a0498ae11dea31201abc8dfdf3eb229ec353a4ecf145b8528f1fd23ab07a4e"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.091151 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" podStartSLOduration=119.091133197 podStartE2EDuration="1m59.091133197s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.075332943 +0000 UTC m=+145.364764581" watchObservedRunningTime="2026-02-19 09:47:16.091133197 +0000 UTC m=+145.380564835"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.093370 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" event={"ID":"4bf2ad48-6696-4f08-adc8-330fd4c25028","Type":"ContainerStarted","Data":"ab261c5d820be732f26ee439fe66ecdaa1b4d0ed012e3d18ace9fbd2e4de6b5a"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.105650 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.109264 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.609244538 +0000 UTC m=+145.898676286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.121511 4873 generic.go:334] "Generic (PLEG): container finished" podID="5968ec26-dea6-4e79-99b1-5954e173d226" containerID="610149f7289ffe0e19460275fe847c040900eefe73d678ea30ef2cab6c0695df" exitCode=0
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.121593 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" event={"ID":"5968ec26-dea6-4e79-99b1-5954e173d226","Type":"ContainerDied","Data":"610149f7289ffe0e19460275fe847c040900eefe73d678ea30ef2cab6c0695df"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.123401 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" event={"ID":"c5e97ddb-b404-4ce2-b760-2739c36c755a","Type":"ContainerStarted","Data":"0fac0c7f9c572d1fe7c43463ff755f4edcdf7517898f1ac1c0bffad4086649e5"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.124209 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.134724 4873 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rpsnj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body=
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.134773 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" podUID="c5e97ddb-b404-4ce2-b760-2739c36c755a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.135755 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" event={"ID":"7671d99c-f025-4e36-b336-106655ec13ef","Type":"ContainerStarted","Data":"51d7fa54d22d6d20f4df5008e8bc0c33ffeb62a7704a6f583a20f644788356c3"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.161479 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" event={"ID":"f69ad03d-7d61-4b31-a556-325751fcba8e","Type":"ContainerStarted","Data":"af0ed8263d6c2f7d43d615058f9b59d5e06528f281061126ed76bc142d45ab55"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.181984 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" event={"ID":"bd468f98-7720-4f9a-972f-684b96f4f90f","Type":"ContainerStarted","Data":"444f0cf49cee0a425b45e35836ef80b67c64300d173966d869b2bf6f32c4f2d2"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.200080 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" event={"ID":"06e4a751-614f-49d2-8246-c76419d1ccb4","Type":"ContainerStarted","Data":"19ed82ac93e13dbda08f17685ab76cf45a36618aa1b5cdd0f6ed8debf558ed56"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.200140 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" event={"ID":"06e4a751-614f-49d2-8246-c76419d1ccb4","Type":"ContainerStarted","Data":"7f930d77d6c1b794c971625c5385cd1fdd036f11c2a23b5d63fabb2ad61b5233"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.201772 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.202238 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.205598 4873 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dg6jw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.205748 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" podUID="06e4a751-614f-49d2-8246-c76419d1ccb4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.206448 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.207707 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.707690749 +0000 UTC m=+145.997122387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.232421 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9pq25" event={"ID":"e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e","Type":"ContainerStarted","Data":"fe3c9d0e10d4bd754ef0e8d8f7d4d741dab93ae4e01beee7ad2c4e1fb5e7655c"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.232459 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9pq25" event={"ID":"e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e","Type":"ContainerStarted","Data":"0daa5ed7cee4ce149fc32b28c1540ff717542388480b099848cac564d411d3c5"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.233290 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9pq25"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.244174 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.244280 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.245961 4873 generic.go:334] "Generic (PLEG): container finished" podID="3c4f7134-312f-4f1d-a344-80d44d65c371" containerID="1bb88329deecbdc4f73b8c08c936a5a9b759033861f38d2ad6d0f8a5bfa48630" exitCode=0
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.246071 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" event={"ID":"3c4f7134-312f-4f1d-a344-80d44d65c371","Type":"ContainerDied","Data":"1bb88329deecbdc4f73b8c08c936a5a9b759033861f38d2ad6d0f8a5bfa48630"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.271645 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" event={"ID":"829eb540-5f77-4748-a99d-c5bdbd13c26f","Type":"ContainerStarted","Data":"b45796554170d7263870c3f2f726d814086aaeea96971eab4c58f1dc507e6dd8"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.296952 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" event={"ID":"4b881e81-67ed-4c33-a992-da59d7996b9d","Type":"ContainerStarted","Data":"3e50b8b153bf2d7257ba5ed2d4d11a2070ac6405ee8e2a20e920dbe2fa585782"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.318179 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.318705 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.818690622 +0000 UTC m=+146.108122260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.329677 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" event={"ID":"b2d87932-1993-464d-b3d2-71025526e1f2","Type":"ContainerStarted","Data":"e9b65aa400ac589e45085a45d51a94b9442ac42995dbdbdf2ff5aed61ba87bd4"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.336382 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" podStartSLOduration=119.33636425 podStartE2EDuration="1m59.33636425s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.279837675 +0000 UTC m=+145.569269313" watchObservedRunningTime="2026-02-19 09:47:16.33636425 +0000 UTC m=+145.625795888"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.372058 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" podStartSLOduration=119.372039446 podStartE2EDuration="1m59.372039446s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.335175176 +0000 UTC m=+145.624606824" watchObservedRunningTime="2026-02-19 09:47:16.372039446 +0000 UTC m=+145.661471084"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.387936 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mv87q" event={"ID":"176cb3ad-1201-420f-bdb2-586f974aeaf2","Type":"ContainerStarted","Data":"d8d905c4fcc5875c73c675b7e5a5b70379e42bdf043c48cb30f8359b1f025c27"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.419535 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.420489 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" event={"ID":"79bb3a49-346f-49b7-bb8e-c358105f8035","Type":"ContainerStarted","Data":"895da616065009c7730ac4dd615ae2f8e29d586af3aa212e7cc0452f645aa23d"}
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.421491 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:16.921470348 +0000 UTC m=+146.210901986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.445532 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" podStartSLOduration=119.44551689 podStartE2EDuration="1m59.44551689s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.408278789 +0000 UTC m=+145.697710427" watchObservedRunningTime="2026-02-19 09:47:16.44551689 +0000 UTC m=+145.734948528"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.501070 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s67xb" podStartSLOduration=119.501030997 podStartE2EDuration="1m59.501030997s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.444073218 +0000 UTC m=+145.733504856" watchObservedRunningTime="2026-02-19 09:47:16.501030997 +0000 UTC m=+145.790462635"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.503933 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" event={"ID":"f905b5ea-71df-4b1c-997c-d68766bcfcfe","Type":"ContainerStarted","Data":"019acffae30ee36980fd8260d8a8299738a95c80eb08007a6c0478560261a038"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.504424 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.505272 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4g545"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.513762 4873 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-86hhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.513829 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.521166 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq"
Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.522917 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.022894095 +0000 UTC m=+146.312325733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.537521 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kzpbf" event={"ID":"34f3caca-1b4c-493d-a10b-277b42d7ce72","Type":"ContainerStarted","Data":"b309e65f91adb4e60631ce7b7b48f8032fc701bb11775ba8684ecc4fb05b7104"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.541257 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9pq25" podStartSLOduration=119.541232683 podStartE2EDuration="1m59.541232683s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.503819387 +0000 UTC m=+145.793251025" watchObservedRunningTime="2026-02-19 09:47:16.541232683 +0000 UTC m=+145.830664321"
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.563571 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" event={"ID":"5288b888-1b48-4590-8d10-f3688ba87a41","Type":"ContainerStarted","Data":"bb50b1fbd6a6070765941c78978a2ace71c0cb9cfcbff51b47f3f5a81c2d5454"}
Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.595192 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" podStartSLOduration=119.595172424 podStartE2EDuration="1m59.595172424s" 
podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.542303284 +0000 UTC m=+145.831734922" watchObservedRunningTime="2026-02-19 09:47:16.595172424 +0000 UTC m=+145.884604062" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.608308 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p97g8" event={"ID":"bf2de5cd-4280-4c0c-9276-b693a51986b7","Type":"ContainerStarted","Data":"b00108239b20276ebdb4aea3d0da30d4c829283729a58c6f65febe4073113c18"} Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.623187 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.625733 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.125702352 +0000 UTC m=+146.415133990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.628031 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.628249 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.631269 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.131233581 +0000 UTC m=+146.420665219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.651647 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bxfwb" podStartSLOduration=119.651625858 podStartE2EDuration="1m59.651625858s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.63500622 +0000 UTC m=+145.924437858" watchObservedRunningTime="2026-02-19 09:47:16.651625858 +0000 UTC m=+145.941057496" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.660613 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" podStartSLOduration=119.660593796 podStartE2EDuration="1m59.660593796s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.658345071 +0000 UTC m=+145.947776709" watchObservedRunningTime="2026-02-19 09:47:16.660593796 +0000 UTC m=+145.950025434" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.711400 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f24fn" podStartSLOduration=119.711381687 podStartE2EDuration="1m59.711381687s" 
podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.709552694 +0000 UTC m=+145.998984332" watchObservedRunningTime="2026-02-19 09:47:16.711381687 +0000 UTC m=+146.000813325" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.742600 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.745805 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.245783276 +0000 UTC m=+146.535214914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.828475 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" podStartSLOduration=119.828461654 podStartE2EDuration="1m59.828461654s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.827307181 +0000 UTC m=+146.116738819" watchObservedRunningTime="2026-02-19 09:47:16.828461654 +0000 UTC m=+146.117893292" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.853797 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.854086 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.354073911 +0000 UTC m=+146.643505549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.885155 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dxcz7" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.939862 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kzpbf" podStartSLOduration=119.939843227 podStartE2EDuration="1m59.939843227s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.884731202 +0000 UTC m=+146.174162830" watchObservedRunningTime="2026-02-19 09:47:16.939843227 +0000 UTC m=+146.229274865" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.940780 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-p97g8" podStartSLOduration=6.940772364 podStartE2EDuration="6.940772364s" podCreationTimestamp="2026-02-19 09:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:16.938501649 +0000 UTC m=+146.227933287" watchObservedRunningTime="2026-02-19 09:47:16.940772364 +0000 UTC m=+146.230204002" Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.954262 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.954397 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.454357875 +0000 UTC m=+146.743789513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:16 crc kubenswrapper[4873]: I0219 09:47:16.954867 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:16 crc kubenswrapper[4873]: E0219 09:47:16.955175 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.455164468 +0000 UTC m=+146.744596106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.055641 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.056056 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.556036849 +0000 UTC m=+146.845468487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.144305 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-k627b" podStartSLOduration=120.144280077 podStartE2EDuration="2m0.144280077s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.067737666 +0000 UTC m=+146.357169304" watchObservedRunningTime="2026-02-19 09:47:17.144280077 +0000 UTC m=+146.433711835" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.160062 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.170919 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.670904703 +0000 UTC m=+146.960336341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.275441 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.276327 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.776311175 +0000 UTC m=+147.065742813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.276437 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.276765 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.776757598 +0000 UTC m=+147.066189236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.340938 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.345255 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:17 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:17 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:17 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.345307 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.379609 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.380008 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.879994437 +0000 UTC m=+147.169426075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.485768 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.486090 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:17.986078268 +0000 UTC m=+147.275509906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.587047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.587244 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.087189996 +0000 UTC m=+147.376621634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.587360 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.587658 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.087649869 +0000 UTC m=+147.377081507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.624938 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" event={"ID":"f69ad03d-7d61-4b31-a556-325751fcba8e","Type":"ContainerStarted","Data":"fa4571a1c2fe413379ed2f470627ac024dbe416e16701b8dcc3c2fb35ad01141"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.629812 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" event={"ID":"6f60efd0-54f5-43eb-b824-f8eaa836df60","Type":"ContainerStarted","Data":"392ce3888e876dbb9146823d117c34dc5c175049e9bcfec4f3edbea66c9b69ed"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.631543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mv87q" event={"ID":"176cb3ad-1201-420f-bdb2-586f974aeaf2","Type":"ContainerStarted","Data":"f74f6843c1c49ef8b307afe5321bc88322055abf9a9d42846230996638bf2c65"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.631571 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mv87q" event={"ID":"176cb3ad-1201-420f-bdb2-586f974aeaf2","Type":"ContainerStarted","Data":"7f41265a174b3dc391e4997ed1e70e57d84c83bcecf16779a1a2993039c63413"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.631915 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:17 crc kubenswrapper[4873]: 
I0219 09:47:17.638954 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" event={"ID":"40382b72-88a7-4f37-9192-a555a259d4bd","Type":"ContainerStarted","Data":"ccb8fd8db4e255b70df5d9413f28212b56be1ea72e4fff568eb751bdc42dfc01"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.641635 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" event={"ID":"5968ec26-dea6-4e79-99b1-5954e173d226","Type":"ContainerStarted","Data":"2f1c16050d17f4f8ab49a62bd3cb2abaa8f050fa7baa896ed8a68aa0dd07562d"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.641661 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" event={"ID":"5968ec26-dea6-4e79-99b1-5954e173d226","Type":"ContainerStarted","Data":"654bd2e085d743a5b80f67faf8bdf89a529f19e9ca993c07d50c98f152e9aa67"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.645614 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" event={"ID":"f905b5ea-71df-4b1c-997c-d68766bcfcfe","Type":"ContainerStarted","Data":"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.646394 4873 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-86hhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.646432 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: 
connect: connection refused" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.650074 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" event={"ID":"5b283da7-d736-4ac2-a290-e142728e838a","Type":"ContainerStarted","Data":"c569c9f64f8efa230aabda049b973c4e298eb4c5e841da7ae5334645e79904a0"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.650123 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" event={"ID":"5b283da7-d736-4ac2-a290-e142728e838a","Type":"ContainerStarted","Data":"c531a0594996749d18410ed12fcc090b589c886b5c87dc00cd63928a8107091d"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.650905 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.652001 4873 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2b5f5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.652027 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" podUID="5b283da7-d736-4ac2-a290-e142728e838a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.652611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lkp4m" 
event={"ID":"5288b888-1b48-4590-8d10-f3688ba87a41","Type":"ContainerStarted","Data":"03861aebdd7aa4f810bd47019d84f0fbf571701bfa0a6961b9463f32e15fe45a"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.664569 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kx6gv" podStartSLOduration=120.664556521 podStartE2EDuration="2m0.664556521s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.662376918 +0000 UTC m=+146.951808556" watchObservedRunningTime="2026-02-19 09:47:17.664556521 +0000 UTC m=+146.953988159" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.677043 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" event={"ID":"21aad9a0-00de-4f42-9923-6c66c79a3a8d","Type":"ContainerStarted","Data":"baaed203c6a1d4bd0f264963f01d1e8ced2bc86f52c8d93ed1609744e131701b"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.677090 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" event={"ID":"21aad9a0-00de-4f42-9923-6c66c79a3a8d","Type":"ContainerStarted","Data":"7f4dc70a8d22023b3b32a54af9716b31d1084b495da67830931010fc1b0bb8f4"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.677117 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" event={"ID":"21aad9a0-00de-4f42-9923-6c66c79a3a8d","Type":"ContainerStarted","Data":"c4b052a5483678d753afdf1aa3785ac25ddbd83ec2a60fa86c3a96264b652428"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.679403 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" event={"ID":"48911b55-fb42-412b-9298-4cba1105a164","Type":"ContainerStarted","Data":"ebfd2ce7872ae12742424aef8df0bbf397b8040075ca34c0cf06f9da9fbe9224"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.684521 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p97g8" event={"ID":"bf2de5cd-4280-4c0c-9276-b693a51986b7","Type":"ContainerStarted","Data":"bb48a06737620a3a4beca3c7b98ccea529eaf53424fb7b0b83507caf6b582baa"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.686524 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" event={"ID":"c5e97ddb-b404-4ce2-b760-2739c36c755a","Type":"ContainerStarted","Data":"ffe2d8ab5cb76d62e3535aa09dfd53d0d6b32bd77ad717ce28b04abed265c514"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.688342 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.689870 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.189828408 +0000 UTC m=+147.479260046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.714994 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" event={"ID":"2e3dce33-cc6d-41b5-ac17-481a98c06373","Type":"ContainerStarted","Data":"95208c3f7ae6ed0e359d624f2cd6b3ec4a43078b3764df7d20384bfb957edcc1"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.722546 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" event={"ID":"4bf2ad48-6696-4f08-adc8-330fd4c25028","Type":"ContainerStarted","Data":"b9e30abb2510dc566b3639aefe80d5895fa436bf9379a2c609ddc205a82604d5"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.722588 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" event={"ID":"4bf2ad48-6696-4f08-adc8-330fd4c25028","Type":"ContainerStarted","Data":"d28d10ce9f03794e11a23fa90d1fcd426a1fb2fd80ce53230a0b702bec894f6b"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.723178 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.724482 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" 
event={"ID":"12ef881d-885a-4215-bd57-27966cb209b8","Type":"ContainerStarted","Data":"6de934514d24c0fc178989594dd8ef80750809aa82f25b04f2ef7016dfbda7bd"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.725896 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2798g" event={"ID":"7a3637cc-cfef-446c-b0fb-f37f3396e0d7","Type":"ContainerStarted","Data":"33e766298c4ac8a82b60e485ee3da37b8128c024f19904586311634509db0398"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.738436 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" event={"ID":"3c4f7134-312f-4f1d-a344-80d44d65c371","Type":"ContainerStarted","Data":"5194f241d142d3bedc938db89f1735ed15cb997da623f56b217c30c74cbd31cd"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.738973 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.753619 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" event={"ID":"57d54c43-611a-40f1-b05e-9a0007dbe3ec","Type":"ContainerStarted","Data":"470243dcfcb35d2f3e0759c613462fd35c7e2f8c39ef376ee7b0a205fdc17b16"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.756633 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.784765 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vklwp" podStartSLOduration=120.784722237 podStartE2EDuration="2m0.784722237s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 09:47:17.766450512 +0000 UTC m=+147.055882150" watchObservedRunningTime="2026-02-19 09:47:17.784722237 +0000 UTC m=+147.074153875" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.792930 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.793455 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.293426068 +0000 UTC m=+147.582857706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.808951 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.809144 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.809166 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tjxkj" 
event={"ID":"829eb540-5f77-4748-a99d-c5bdbd13c26f","Type":"ContainerStarted","Data":"d5d942e672ae021105b11e63ce43102f38fad5f5cbaeb996e1aaa544f1b300a6"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.809591 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.815542 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.816300 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.825957 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" event={"ID":"7671d99c-f025-4e36-b336-106655ec13ef","Type":"ContainerStarted","Data":"6c62cffe8c256d15bc1fb13f0a13cbf87c7055d0df2d95affa4598eec4d2f960"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.826005 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" event={"ID":"7671d99c-f025-4e36-b336-106655ec13ef","Type":"ContainerStarted","Data":"dac674c05caf9ed80f36d48640d046817e30c59e1940a881e81742bf1276623b"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.830180 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" event={"ID":"a5920bdb-afd9-401e-8f11-108a90660e1c","Type":"ContainerStarted","Data":"8fb4855d36dbe8c0caf6f5c5cca86c812b9220cdc618cae25d2cd061754b51ce"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.830212 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" 
event={"ID":"a5920bdb-afd9-401e-8f11-108a90660e1c","Type":"ContainerStarted","Data":"01ef2b0c84c18ace3347b5260941e99078f586f3684f9c7da48796b5cf1cf72d"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.835013 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.862603 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d75st" event={"ID":"2877ec4c-7a3e-4105-ac87-6d096df10661","Type":"ContainerStarted","Data":"6b5be3c9aaaeafbac09b7a0ad8e8084439b6283e4f97911b7e226afb7fa36061"} Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.863493 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9hwg5" podStartSLOduration=120.863477193 podStartE2EDuration="2m0.863477193s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.862661659 +0000 UTC m=+147.152093297" watchObservedRunningTime="2026-02-19 09:47:17.863477193 +0000 UTC m=+147.152908831" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.864312 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.864353 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:17 crc 
kubenswrapper[4873]: I0219 09:47:17.883292 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7bgm9" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.901937 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dg6jw" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.902718 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:17 crc kubenswrapper[4873]: E0219 09:47:17.903710 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.403674459 +0000 UTC m=+147.693106097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.914304 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" podStartSLOduration=120.914281744 podStartE2EDuration="2m0.914281744s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.907314373 +0000 UTC m=+147.196746011" watchObservedRunningTime="2026-02-19 09:47:17.914281744 +0000 UTC m=+147.203713382" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.925652 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.928599 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.932246 4873 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gbzll container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.932311 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" podUID="5968ec26-dea6-4e79-99b1-5954e173d226" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 19 09:47:17 crc kubenswrapper[4873]: I0219 09:47:17.942275 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mv87q" podStartSLOduration=7.942259098 podStartE2EDuration="7.942259098s" podCreationTimestamp="2026-02-19 09:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.941894198 +0000 UTC m=+147.231325836" watchObservedRunningTime="2026-02-19 09:47:17.942259098 +0000 UTC m=+147.231690736" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.006169 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.008602 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.508588106 +0000 UTC m=+147.798019744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.008629 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.008742 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.009004 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl6f\" (UniqueName: \"kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.021778 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" podStartSLOduration=121.021762445 podStartE2EDuration="2m1.021762445s" podCreationTimestamp="2026-02-19 
09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:17.977339057 +0000 UTC m=+147.266770705" watchObservedRunningTime="2026-02-19 09:47:18.021762445 +0000 UTC m=+147.311194083" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.022963 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" podStartSLOduration=121.022956469 podStartE2EDuration="2m1.022956469s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.019035297 +0000 UTC m=+147.308466935" watchObservedRunningTime="2026-02-19 09:47:18.022956469 +0000 UTC m=+147.312388107" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.056399 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5tt6k" podStartSLOduration=121.056382471 podStartE2EDuration="2m1.056382471s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.0549686 +0000 UTC m=+147.344400238" watchObservedRunningTime="2026-02-19 09:47:18.056382471 +0000 UTC m=+147.345814109" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.109684 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.109953 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.109996 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.110042 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sl6f\" (UniqueName: \"kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.110462 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.610447646 +0000 UTC m=+147.899879284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.110819 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.111024 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.142303 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8cxf7" podStartSLOduration=121.142287202 podStartE2EDuration="2m1.142287202s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.102853757 +0000 UTC m=+147.392285395" watchObservedRunningTime="2026-02-19 09:47:18.142287202 +0000 UTC m=+147.431718840" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.166560 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sl6f\" (UniqueName: 
\"kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f\") pod \"community-operators-jm66x\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.185885 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" podStartSLOduration=121.185858875 podStartE2EDuration="2m1.185858875s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.177469523 +0000 UTC m=+147.466901161" watchObservedRunningTime="2026-02-19 09:47:18.185858875 +0000 UTC m=+147.475290513" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.195421 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.196779 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.205840 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.211473 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.211978 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.711967106 +0000 UTC m=+148.001398744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.243686 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.243763 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.289201 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qmrn5" podStartSLOduration=121.289171416 podStartE2EDuration="2m1.289171416s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.286716135 +0000 UTC m=+147.576147773" watchObservedRunningTime="2026-02-19 09:47:18.289171416 +0000 UTC m=+147.578603054" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.312646 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.312854 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.812813806 +0000 UTC m=+148.102245444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.313088 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.313174 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.313204 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.313236 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4qh\" (UniqueName: \"kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.313508 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.813500756 +0000 UTC m=+148.102932394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.327722 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hs5fr" podStartSLOduration=121.327702284 podStartE2EDuration="2m1.327702284s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.326830189 +0000 UTC m=+147.616261827" watchObservedRunningTime="2026-02-19 09:47:18.327702284 +0000 UTC m=+147.617133922" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.343684 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:18 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:18 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:18 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.343735 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.356840 4873 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-24gcv" podStartSLOduration=121.356824942 podStartE2EDuration="2m1.356824942s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.354811994 +0000 UTC m=+147.644243632" watchObservedRunningTime="2026-02-19 09:47:18.356824942 +0000 UTC m=+147.646256580" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.413700 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.414212 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.414338 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc4qh\" (UniqueName: \"kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.414504 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content\") pod 
\"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.414907 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:18.914890462 +0000 UTC m=+148.204322100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.415474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.415492 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.431649 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mwl9k" podStartSLOduration=121.431610053 
podStartE2EDuration="2m1.431610053s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.42768721 +0000 UTC m=+147.717118848" watchObservedRunningTime="2026-02-19 09:47:18.431610053 +0000 UTC m=+147.721041691" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.435966 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.463642 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc4qh\" (UniqueName: \"kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh\") pod \"community-operators-5fj2x\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.514606 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.515704 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.516194 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.016168734 +0000 UTC m=+148.305600372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.575955 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2798g" podStartSLOduration=8.575935123 podStartE2EDuration="8.575935123s" podCreationTimestamp="2026-02-19 09:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.477977295 +0000 UTC m=+147.767408933" watchObservedRunningTime="2026-02-19 09:47:18.575935123 +0000 UTC m=+147.865366751" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.578186 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.579229 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pzspc" podStartSLOduration=121.579217817 podStartE2EDuration="2m1.579217817s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:18.578158047 +0000 UTC m=+147.867589685" watchObservedRunningTime="2026-02-19 09:47:18.579217817 +0000 UTC m=+147.868649455" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.580574 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.615420 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.617883 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.618210 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.118194748 +0000 UTC m=+148.407626386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.642358 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.693188 4873 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rpsnj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.693248 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" podUID="c5e97ddb-b404-4ce2-b760-2739c36c755a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.721088 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnz6w\" (UniqueName: \"kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.721326 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.721380 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.721416 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.722124 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.722995 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.733256 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.733648 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 09:47:19.233633498 +0000 UTC m=+148.523065136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835582 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835800 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnz6w\" (UniqueName: \"kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835874 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835920 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835940 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6xh\" (UniqueName: \"kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.835958 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.836358 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.836555 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.836929 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.336909939 +0000 UTC m=+148.626341577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.866047 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnz6w\" (UniqueName: \"kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w\") pod \"certified-operators-tnf24\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.901146 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d75st" event={"ID":"2877ec4c-7a3e-4105-ac87-6d096df10661","Type":"ContainerStarted","Data":"149b1f49fa4e1ae8aa9adfb073381e6eff152bf0f74ef70ed4423616a7ab8487"} Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.902679 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.902724 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.913355 4873 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-86hhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.913412 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.916404 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2b5f5" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.930504 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.938753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.938831 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.938862 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.938880 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq6xh\" (UniqueName: \"kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: E0219 09:47:18.939375 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 09:47:19.439362836 +0000 UTC m=+148.728794474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.939692 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.948352 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:18 crc kubenswrapper[4873]: I0219 09:47:18.989141 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq6xh\" (UniqueName: \"kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh\") pod \"certified-operators-8mch8\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.040719 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.053755 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpsnj" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.054265 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.55424546 +0000 UTC m=+148.843677098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.096388 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.144603 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.145317 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.645302149 +0000 UTC m=+148.934733787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.166429 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:47:19 crc kubenswrapper[4873]: W0219 09:47:19.219608 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d58373_fe5d_4afe_9da1_256843164ff4.slice/crio-81841dde96f0bd4c162af34dbaad80f9410851d99d1c9f71453d6997e197a658 WatchSource:0}: Error finding container 81841dde96f0bd4c162af34dbaad80f9410851d99d1c9f71453d6997e197a658: Status 404 returned error can't find the container with id 
81841dde96f0bd4c162af34dbaad80f9410851d99d1c9f71453d6997e197a658 Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.247712 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.248275 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.7482589 +0000 UTC m=+149.037690538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.347518 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:19 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:19 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:19 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.347577 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" 
podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.352743 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.352799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.352821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.352862 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.352888 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.356746 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.85673172 +0000 UTC m=+149.146163358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.357295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.360965 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.374684 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.376690 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.454397 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.454789 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:19.95477262 +0000 UTC m=+149.244204258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.511340 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.519349 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.531365 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.556978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.557593 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.057580777 +0000 UTC m=+149.347012405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.603189 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.659592 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.659953 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.159936651 +0000 UTC m=+149.449368279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.747345 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.763562 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.763909 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.26389376 +0000 UTC m=+149.553325398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.870872 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.871243 4873 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.871341 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.37132351 +0000 UTC m=+149.660755148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.871901 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.872411 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.372378861 +0000 UTC m=+149.661810499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.913395 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerStarted","Data":"49ebe6c3ea35eaecd163d7a7c155a22151d195a56ce773049fc5f4d9fdced9e7"} Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.940387 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d75st" event={"ID":"2877ec4c-7a3e-4105-ac87-6d096df10661","Type":"ContainerStarted","Data":"6e0b08e9eb979cfad9507bc13238bb0d9f9a803b6e9406da1617ac3758ae9062"} Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.953712 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerStarted","Data":"3d229a1d7483ee232f5190406e28ea1aa38e3259959252fbb620deb657e8a447"} Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.971285 4873 generic.go:334] "Generic (PLEG): container finished" podID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerID="c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a" exitCode=0 Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.972748 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" 
event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerDied","Data":"c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a"} Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.972776 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerStarted","Data":"81841dde96f0bd4c162af34dbaad80f9410851d99d1c9f71453d6997e197a658"} Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.974580 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.975429 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.975697 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.475683402 +0000 UTC m=+149.765115040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:19 crc kubenswrapper[4873]: I0219 09:47:19.975757 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:19 crc kubenswrapper[4873]: E0219 09:47:19.976028 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.476020982 +0000 UTC m=+149.765452620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.049047 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.090496 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.091895 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.591878454 +0000 UTC m=+149.881310092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.193963 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.194689 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.69467762 +0000 UTC m=+149.984109258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.295810 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.295968 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.795941193 +0000 UTC m=+150.085372841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.296430 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.296698 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.796688975 +0000 UTC m=+150.086120613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.358394 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:20 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:20 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:20 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.358458 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.399658 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.399760 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 09:47:20.899743719 +0000 UTC m=+150.189175357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.399965 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.400265 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:20.900258423 +0000 UTC m=+150.189690061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.500517 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.500769 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.000755704 +0000 UTC m=+150.290187342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.514706 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.515651 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.520391 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.523722 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.604440 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gtj\" (UniqueName: \"kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.604493 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.604518 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.604545 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.604777 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.104765735 +0000 UTC m=+150.394197373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: W0219 09:47:20.701353 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d84a6c10865f58fa1503155da372265852abc2c6e32f99fd581b130b7608c666 WatchSource:0}: Error finding container d84a6c10865f58fa1503155da372265852abc2c6e32f99fd581b130b7608c666: Status 404 returned error can't find the container with id d84a6c10865f58fa1503155da372265852abc2c6e32f99fd581b130b7608c666 Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.705323 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.705487 4873 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.205463462 +0000 UTC m=+150.494895100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.705596 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gtj\" (UniqueName: \"kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.705660 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.705695 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc 
kubenswrapper[4873]: I0219 09:47:20.705725 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.706079 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.206063839 +0000 UTC m=+150.495495477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.706152 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.706285 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " 
pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.722284 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gtj\" (UniqueName: \"kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj\") pod \"redhat-marketplace-hv2j6\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.806586 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.806731 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.306710464 +0000 UTC m=+150.596142102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.806955 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:20 crc kubenswrapper[4873]: E0219 09:47:20.807276 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:47:21.30726852 +0000 UTC m=+150.596700158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7hhjq" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.810335 4873 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T09:47:19.871863236Z","Handler":null,"Name":""} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.829947 4873 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.830206 4873 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.908046 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.908476 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.909598 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.917325 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.921228 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.922921 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.981431 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6214ae4158cd5843d599b33ccedfce40761d283932f955047b85b89a2124b12b"} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.981824 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d84a6c10865f58fa1503155da372265852abc2c6e32f99fd581b130b7608c666"} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.985270 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d75st" event={"ID":"2877ec4c-7a3e-4105-ac87-6d096df10661","Type":"ContainerStarted","Data":"67d33499573771b5affe82793ac6ad4acb85ff82981999ff3f3e82db0366b27b"} Feb 19 09:47:20 crc kubenswrapper[4873]: 
I0219 09:47:20.986734 4873 generic.go:334] "Generic (PLEG): container finished" podID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerID="359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda" exitCode=0 Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.986806 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerDied","Data":"359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda"} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.989379 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"76add2a3caef6b0ae06b59fafacc47e913c53f7e141d4a94c87f48a39f282b65"} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.989409 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4971f0d3a202127387c09395d986632ab538b23ebe11137728b46f41aea190a7"} Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.989603 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.996334 4873 generic.go:334] "Generic (PLEG): container finished" podID="de77b9aa-b558-4431-b116-5e1e1cc116f3" containerID="e60bc2f916aff75454f8db4d5b15c6ae005baebfdcb79c0c87df06d3a9db5142" exitCode=0 Feb 19 09:47:20 crc kubenswrapper[4873]: I0219 09:47:20.996414 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" event={"ID":"de77b9aa-b558-4431-b116-5e1e1cc116f3","Type":"ContainerDied","Data":"e60bc2f916aff75454f8db4d5b15c6ae005baebfdcb79c0c87df06d3a9db5142"} Feb 
19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:20.998950 4873 generic.go:334] "Generic (PLEG): container finished" podID="061e8672-31d8-48ec-87fc-158e44af91e4" containerID="0ca6133aff5d54bab32e361a7911c1c5856dd641828cf1fe1361309a8d03164b" exitCode=0 Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:20.999012 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerDied","Data":"0ca6133aff5d54bab32e361a7911c1c5856dd641828cf1fe1361309a8d03164b"} Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:20.999039 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerStarted","Data":"4f91cfd7b327a1281f11c3570024804d58c90780fc2519b06ceb03c7886c6273"} Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.010184 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9f6\" (UniqueName: \"kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.010257 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.010418 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.010480 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.012901 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b06ae8c6cdc1344d37140534f4318d41b0912691854083bec1921e704d22d1ff"} Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.012945 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c751a6335010bd4fc9b0866e320452a8a243ea2f63b63cf49f21c2ab6542e8f6"} Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.015761 4873 generic.go:334] "Generic (PLEG): container finished" podID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerID="ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210" exitCode=0 Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.015895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerDied","Data":"ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210"} Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.042191 4873 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.042232 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.064478 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-d75st" podStartSLOduration=11.064452007 podStartE2EDuration="11.064452007s" podCreationTimestamp="2026-02-19 09:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:21.062289124 +0000 UTC m=+150.351720772" watchObservedRunningTime="2026-02-19 09:47:21.064452007 +0000 UTC m=+150.353883655" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.112340 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.112422 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities\") pod \"redhat-marketplace-2jgk6\" (UID: 
\"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.112481 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9f6\" (UniqueName: \"kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.113508 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.117825 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.149259 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9f6\" (UniqueName: \"kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6\") pod \"redhat-marketplace-2jgk6\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.156009 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7hhjq\" 
(UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.192755 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.227415 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.239834 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.313999 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.315634 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.322277 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.326754 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.346471 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:21 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:21 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:21 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.346525 4873 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.415375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.415425 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gph4z\" (UniqueName: \"kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.415493 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.481787 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.512666 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.514648 4873 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.515893 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.524456 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.524879 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.524921 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gph4z\" (UniqueName: \"kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.525036 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.525463 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.525781 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.525925 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: W0219 09:47:21.526038 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode767e90e_5146_4f1e_9f0b_5f5acb185429.slice/crio-2314d422015de89852637bb24195891fe7f3f4631802a8cc5426a4f84f3df229 WatchSource:0}: Error finding container 2314d422015de89852637bb24195891fe7f3f4631802a8cc5426a4f84f3df229: Status 404 returned error can't find the container with id 2314d422015de89852637bb24195891fe7f3f4631802a8cc5426a4f84f3df229 Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.556393 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gph4z\" (UniqueName: \"kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z\") pod \"redhat-operators-gjn8l\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: W0219 09:47:21.563349 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2948a5a7_4d94_4314_acdf_489dd93609b9.slice/crio-9186481593e0db9c07ae375e1f7f148954394edd55d25a1feea71b13835f9c08 WatchSource:0}: Error finding container 9186481593e0db9c07ae375e1f7f148954394edd55d25a1feea71b13835f9c08: Status 404 returned error can't find the container with id 
9186481593e0db9c07ae375e1f7f148954394edd55d25a1feea71b13835f9c08 Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.625806 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.625861 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.626071 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zms24\" (UniqueName: \"kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.669710 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.728629 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.728683 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.728731 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zms24\" (UniqueName: \"kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.729304 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.729397 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " 
pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.766077 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zms24\" (UniqueName: \"kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24\") pod \"redhat-operators-dzcdv\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:21 crc kubenswrapper[4873]: I0219 09:47:21.868408 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.030977 4873 generic.go:334] "Generic (PLEG): container finished" podID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerID="3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6" exitCode=0 Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.031168 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerDied","Data":"3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.031681 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerStarted","Data":"2314d422015de89852637bb24195891fe7f3f4631802a8cc5426a4f84f3df229"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.035047 4873 generic.go:334] "Generic (PLEG): container finished" podID="0954690a-09f0-4b1b-be57-db87e9304488" containerID="8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b" exitCode=0 Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.035227 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" 
event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerDied","Data":"8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.035271 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerStarted","Data":"edab2539f7fc8755b323d06c9cc87b6333d411f7bbacd04da485c28f244826a3"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.044983 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" event={"ID":"2948a5a7-4d94-4314-acdf-489dd93609b9","Type":"ContainerStarted","Data":"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.045024 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.045038 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" event={"ID":"2948a5a7-4d94-4314-acdf-489dd93609b9","Type":"ContainerStarted","Data":"9186481593e0db9c07ae375e1f7f148954394edd55d25a1feea71b13835f9c08"} Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.103988 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" podStartSLOduration=125.103960894 podStartE2EDuration="2m5.103960894s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:22.103119709 +0000 UTC m=+151.392551347" watchObservedRunningTime="2026-02-19 09:47:22.103960894 +0000 UTC m=+151.393392532" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.168628 4873 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.261341 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gk5mg" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.346910 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:22 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:22 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:22 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.347206 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.422531 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.482153 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.551177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj2jk\" (UniqueName: \"kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk\") pod \"de77b9aa-b558-4431-b116-5e1e1cc116f3\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.551690 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume\") pod \"de77b9aa-b558-4431-b116-5e1e1cc116f3\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.551787 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume\") pod \"de77b9aa-b558-4431-b116-5e1e1cc116f3\" (UID: \"de77b9aa-b558-4431-b116-5e1e1cc116f3\") " Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.552499 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "de77b9aa-b558-4431-b116-5e1e1cc116f3" (UID: "de77b9aa-b558-4431-b116-5e1e1cc116f3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.552634 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de77b9aa-b558-4431-b116-5e1e1cc116f3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.561534 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk" (OuterVolumeSpecName: "kube-api-access-nj2jk") pod "de77b9aa-b558-4431-b116-5e1e1cc116f3" (UID: "de77b9aa-b558-4431-b116-5e1e1cc116f3"). InnerVolumeSpecName "kube-api-access-nj2jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.562255 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de77b9aa-b558-4431-b116-5e1e1cc116f3" (UID: "de77b9aa-b558-4431-b116-5e1e1cc116f3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.619498 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:47:22 crc kubenswrapper[4873]: E0219 09:47:22.619815 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de77b9aa-b558-4431-b116-5e1e1cc116f3" containerName="collect-profiles" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.619835 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="de77b9aa-b558-4431-b116-5e1e1cc116f3" containerName="collect-profiles" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.619941 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="de77b9aa-b558-4431-b116-5e1e1cc116f3" containerName="collect-profiles" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.620469 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.625446 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.625491 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.625608 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.653590 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.653639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.653704 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj2jk\" (UniqueName: \"kubernetes.io/projected/de77b9aa-b558-4431-b116-5e1e1cc116f3-kube-api-access-nj2jk\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.653716 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de77b9aa-b558-4431-b116-5e1e1cc116f3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.723371 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.728779 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.733875 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.734299 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.740860 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.754126 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.754168 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.754215 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.754253 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.754371 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.781483 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.855380 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.855464 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.855530 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.892848 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.931889 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.940251 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gbzll" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.944307 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.944357 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.946651 4873 patch_prober.go:28] interesting pod/console-f9d7485db-shnwj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.946729 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-shnwj" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 
10.217.0.12:8443: connect: connection refused" Feb 19 09:47:22 crc kubenswrapper[4873]: I0219 09:47:22.960182 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.050973 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.067428 4873 generic.go:334] "Generic (PLEG): container finished" podID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerID="75a546ab60f91886bf73906724d9833647cf46b858664cd39c852a73088064e8" exitCode=0 Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.067515 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerDied","Data":"75a546ab60f91886bf73906724d9833647cf46b858664cd39c852a73088064e8"} Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.067543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerStarted","Data":"81340de24ca383dbb41a0340acf197019d868e5563832a3950dc50b33c15f087"} Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.133401 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" event={"ID":"de77b9aa-b558-4431-b116-5e1e1cc116f3","Type":"ContainerDied","Data":"a018522e013b75a19d6f1ebe089ac24d73537d03912cdb27eb2e286e6cfe33f1"} Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.133437 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a018522e013b75a19d6f1ebe089ac24d73537d03912cdb27eb2e286e6cfe33f1" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.133474 4873 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.147694 4873 generic.go:334] "Generic (PLEG): container finished" podID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerID="9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e" exitCode=0 Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.147996 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerDied","Data":"9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e"} Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.148116 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerStarted","Data":"5bca84b1a6668c5e7d3c16b7d1810bc8d1542096d34580cd77564b1a69e0e7cc"} Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.294929 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.295237 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.295681 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: 
connect: connection refused" start-of-body= Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.295698 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.339055 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.341979 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:23 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:23 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:23 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.342029 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.591839 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.732621 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:47:23 crc kubenswrapper[4873]: I0219 09:47:23.770814 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 
09:47:23 crc kubenswrapper[4873]: W0219 09:47:23.809465 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda74b91a5_c78a_4bd3_92d5_1fd2b7237aca.slice/crio-a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1 WatchSource:0}: Error finding container a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1: Status 404 returned error can't find the container with id a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1 Feb 19 09:47:24 crc kubenswrapper[4873]: I0219 09:47:24.173583 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee","Type":"ContainerStarted","Data":"1c55529d8e541c248142ab8026e5850fd051ee5e23de0e342f36d4de091fd688"} Feb 19 09:47:24 crc kubenswrapper[4873]: I0219 09:47:24.179690 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca","Type":"ContainerStarted","Data":"a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1"} Feb 19 09:47:24 crc kubenswrapper[4873]: I0219 09:47:24.341847 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:24 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:24 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:24 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:24 crc kubenswrapper[4873]: I0219 09:47:24.341907 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:25 
crc kubenswrapper[4873]: I0219 09:47:25.192280 4873 generic.go:334] "Generic (PLEG): container finished" podID="d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" containerID="312932e6b0041321c21e003869fd651d5204223b0419fc47ab0b3813ba249bdc" exitCode=0 Feb 19 09:47:25 crc kubenswrapper[4873]: I0219 09:47:25.192527 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee","Type":"ContainerDied","Data":"312932e6b0041321c21e003869fd651d5204223b0419fc47ab0b3813ba249bdc"} Feb 19 09:47:25 crc kubenswrapper[4873]: I0219 09:47:25.217312 4873 generic.go:334] "Generic (PLEG): container finished" podID="a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" containerID="0979056b50e27f65b536fb51d74762974f9a878992d013ad8c3a0ca1ed6ca214" exitCode=0 Feb 19 09:47:25 crc kubenswrapper[4873]: I0219 09:47:25.217361 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca","Type":"ContainerDied","Data":"0979056b50e27f65b536fb51d74762974f9a878992d013ad8c3a0ca1ed6ca214"} Feb 19 09:47:25 crc kubenswrapper[4873]: I0219 09:47:25.342433 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:25 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:25 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:25 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:25 crc kubenswrapper[4873]: I0219 09:47:25.342501 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:26 crc 
kubenswrapper[4873]: I0219 09:47:26.124943 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.342547 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:26 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:26 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:26 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.342599 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.779161 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.795288 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.955896 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access\") pod \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.955988 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir\") pod \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\" (UID: \"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca\") " Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.956011 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir\") pod \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.956047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access\") pod \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\" (UID: \"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee\") " Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.960771 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" (UID: "a74b91a5-c78a-4bd3-92d5-1fd2b7237aca"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.960862 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" (UID: "d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.982729 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" (UID: "d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:26 crc kubenswrapper[4873]: I0219 09:47:26.982793 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" (UID: "a74b91a5-c78a-4bd3-92d5-1fd2b7237aca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.058276 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.058312 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.058320 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.058330 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a74b91a5-c78a-4bd3-92d5-1fd2b7237aca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.241688 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a74b91a5-c78a-4bd3-92d5-1fd2b7237aca","Type":"ContainerDied","Data":"a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1"} Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.241726 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a97cf67854674d49088004bb367b4075c36af7aaa720457ef826861e4ac9c3e1" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.241773 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.247948 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee","Type":"ContainerDied","Data":"1c55529d8e541c248142ab8026e5850fd051ee5e23de0e342f36d4de091fd688"} Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.247987 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c55529d8e541c248142ab8026e5850fd051ee5e23de0e342f36d4de091fd688" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.250342 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.343641 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:27 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:27 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:27 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:27 crc kubenswrapper[4873]: I0219 09:47:27.343703 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:28 crc kubenswrapper[4873]: I0219 09:47:28.341873 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:28 crc 
kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:28 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:28 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:28 crc kubenswrapper[4873]: I0219 09:47:28.342398 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:28 crc kubenswrapper[4873]: I0219 09:47:28.806625 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mv87q" Feb 19 09:47:29 crc kubenswrapper[4873]: I0219 09:47:29.341072 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:29 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:29 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:29 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:29 crc kubenswrapper[4873]: I0219 09:47:29.341159 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:30 crc kubenswrapper[4873]: I0219 09:47:30.342928 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:30 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:30 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:30 crc kubenswrapper[4873]: 
healthz check failed Feb 19 09:47:30 crc kubenswrapper[4873]: I0219 09:47:30.343280 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:31 crc kubenswrapper[4873]: I0219 09:47:31.341348 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:31 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:31 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:31 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:31 crc kubenswrapper[4873]: I0219 09:47:31.341395 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:32 crc kubenswrapper[4873]: I0219 09:47:32.343985 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:32 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:32 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:32 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:32 crc kubenswrapper[4873]: I0219 09:47:32.344253 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
19 09:47:32 crc kubenswrapper[4873]: I0219 09:47:32.944369 4873 patch_prober.go:28] interesting pod/console-f9d7485db-shnwj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 09:47:32 crc kubenswrapper[4873]: I0219 09:47:32.944456 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-shnwj" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.294552 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.294612 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.294847 4873 patch_prober.go:28] interesting pod/downloads-7954f5f757-9pq25 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.294930 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9pq25" podUID="e0f6a9a1-70e5-46ce-97aa-3dc9d26c672e" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.341704 4873 patch_prober.go:28] interesting pod/router-default-5444994796-kzpbf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:47:33 crc kubenswrapper[4873]: [-]has-synced failed: reason withheld Feb 19 09:47:33 crc kubenswrapper[4873]: [+]process-running ok Feb 19 09:47:33 crc kubenswrapper[4873]: healthz check failed Feb 19 09:47:33 crc kubenswrapper[4873]: I0219 09:47:33.341777 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kzpbf" podUID="34f3caca-1b4c-493d-a10b-277b42d7ce72" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:47:34 crc kubenswrapper[4873]: I0219 09:47:34.342831 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:34 crc kubenswrapper[4873]: I0219 09:47:34.346832 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kzpbf" Feb 19 09:47:39 crc kubenswrapper[4873]: I0219 09:47:39.896648 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: \"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:39 crc kubenswrapper[4873]: I0219 09:47:39.919633 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98d35597-056d-48f0-b599-28b098dd45f3-metrics-certs\") pod \"network-metrics-daemon-lcp8k\" (UID: 
\"98d35597-056d-48f0-b599-28b098dd45f3\") " pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:39 crc kubenswrapper[4873]: I0219 09:47:39.944256 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcp8k" Feb 19 09:47:41 crc kubenswrapper[4873]: I0219 09:47:41.246883 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:47:42 crc kubenswrapper[4873]: I0219 09:47:42.950635 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:42 crc kubenswrapper[4873]: I0219 09:47:42.956166 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:47:43 crc kubenswrapper[4873]: I0219 09:47:43.299057 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9pq25" Feb 19 09:47:48 crc kubenswrapper[4873]: I0219 09:47:48.240585 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:47:48 crc kubenswrapper[4873]: I0219 09:47:48.241179 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:47:51 crc kubenswrapper[4873]: I0219 09:47:51.591867 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"] Feb 19 09:47:52 crc 
kubenswrapper[4873]: E0219 09:47:52.523151 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.523366 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc4qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-5fj2x_openshift-marketplace(e52516d8-c410-4dbd-b41f-cbda11425b0e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.524692 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5fj2x" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.621423 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.622026 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2jgk6_openshift-marketplace(e767e90e-5146-4f1e-9f0b-5f5acb185429): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.623194 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2jgk6" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" Feb 19 09:47:52 crc 
kubenswrapper[4873]: E0219 09:47:52.680769 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.680916 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnz6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-tnf24_openshift-marketplace(9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.682090 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tnf24" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.689904 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.690138 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sl6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jm66x_openshift-marketplace(d5d58373-fe5d-4afe-9da1-256843164ff4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.691327 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jm66x" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" Feb 19 09:47:52 crc 
kubenswrapper[4873]: E0219 09:47:52.807613 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.807770 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zms24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-dzcdv_openshift-marketplace(d152d3c6-e3c6-4255-95b5-eafe02557eb9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:47:52 crc kubenswrapper[4873]: E0219 09:47:52.810416 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dzcdv" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" Feb 19 09:47:52 crc kubenswrapper[4873]: I0219 09:47:52.973333 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lcp8k"] Feb 19 09:47:52 crc kubenswrapper[4873]: W0219 09:47:52.978161 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98d35597_056d_48f0_b599_28b098dd45f3.slice/crio-2a9f95e854e8de082c3f0b3167bc0a8f9ba7069fecd049749001c044fa20cb3d WatchSource:0}: Error finding container 2a9f95e854e8de082c3f0b3167bc0a8f9ba7069fecd049749001c044fa20cb3d: Status 404 returned error can't find the container with id 2a9f95e854e8de082c3f0b3167bc0a8f9ba7069fecd049749001c044fa20cb3d Feb 19 09:47:52 crc kubenswrapper[4873]: I0219 09:47:52.992731 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" event={"ID":"98d35597-056d-48f0-b599-28b098dd45f3","Type":"ContainerStarted","Data":"2a9f95e854e8de082c3f0b3167bc0a8f9ba7069fecd049749001c044fa20cb3d"} Feb 19 09:47:52 crc kubenswrapper[4873]: I0219 09:47:52.994730 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerStarted","Data":"dde867d0270d4b3c266ef7d78245dcb4ac5bc44881658ebda50b8400fe4b5ef0"} Feb 19 
09:47:52 crc kubenswrapper[4873]: I0219 09:47:52.998500 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerStarted","Data":"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9"} Feb 19 09:47:53 crc kubenswrapper[4873]: I0219 09:47:53.000593 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerStarted","Data":"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2"} Feb 19 09:47:53 crc kubenswrapper[4873]: E0219 09:47:53.001905 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5fj2x" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" Feb 19 09:47:53 crc kubenswrapper[4873]: E0219 09:47:53.004076 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jm66x" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" Feb 19 09:47:53 crc kubenswrapper[4873]: E0219 09:47:53.004123 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2jgk6" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" Feb 19 09:47:53 crc kubenswrapper[4873]: E0219 09:47:53.004159 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tnf24" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" Feb 19 09:47:53 crc kubenswrapper[4873]: E0219 09:47:53.004603 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dzcdv" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" Feb 19 09:47:53 crc kubenswrapper[4873]: I0219 09:47:53.722924 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jjkrt" Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.007359 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" event={"ID":"98d35597-056d-48f0-b599-28b098dd45f3","Type":"ContainerStarted","Data":"eff0e3f7ead72779407a2eece5581ea236489792ab0ad02d23a186abcf664e2c"} Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.008078 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcp8k" event={"ID":"98d35597-056d-48f0-b599-28b098dd45f3","Type":"ContainerStarted","Data":"c66ebb5bcebeb69e28284dccb73ad3ce6d73f6d7358e10e1e7a0c630ec2629d4"} Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.009166 4873 generic.go:334] "Generic (PLEG): container finished" podID="061e8672-31d8-48ec-87fc-158e44af91e4" containerID="dde867d0270d4b3c266ef7d78245dcb4ac5bc44881658ebda50b8400fe4b5ef0" exitCode=0 Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.009230 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" 
event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerDied","Data":"dde867d0270d4b3c266ef7d78245dcb4ac5bc44881658ebda50b8400fe4b5ef0"} Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.011112 4873 generic.go:334] "Generic (PLEG): container finished" podID="0954690a-09f0-4b1b-be57-db87e9304488" containerID="6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9" exitCode=0 Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.011166 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerDied","Data":"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9"} Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.013713 4873 generic.go:334] "Generic (PLEG): container finished" podID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerID="e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2" exitCode=0 Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.013741 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerDied","Data":"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2"} Feb 19 09:47:54 crc kubenswrapper[4873]: I0219 09:47:54.045012 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lcp8k" podStartSLOduration=157.0449937 podStartE2EDuration="2m37.0449937s" podCreationTimestamp="2026-02-19 09:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:54.028034682 +0000 UTC m=+183.317466320" watchObservedRunningTime="2026-02-19 09:47:54.0449937 +0000 UTC m=+183.334425338" Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.021262 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerStarted","Data":"2dca850c88c97df8745ed0cd2c09857330c9ddf17d65678c2f357f2f1bc1105f"} Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.023418 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerStarted","Data":"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba"} Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.032969 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerStarted","Data":"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0"} Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.077180 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8mch8" podStartSLOduration=3.5553230940000002 podStartE2EDuration="37.077145985s" podCreationTimestamp="2026-02-19 09:47:18 +0000 UTC" firstStartedPulling="2026-02-19 09:47:21.003528114 +0000 UTC m=+150.292959752" lastFinishedPulling="2026-02-19 09:47:54.525350995 +0000 UTC m=+183.814782643" observedRunningTime="2026-02-19 09:47:55.072946574 +0000 UTC m=+184.362378212" watchObservedRunningTime="2026-02-19 09:47:55.077145985 +0000 UTC m=+184.366577623" Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.112891 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hv2j6" podStartSLOduration=2.647876588 podStartE2EDuration="35.112873953s" podCreationTimestamp="2026-02-19 09:47:20 +0000 UTC" firstStartedPulling="2026-02-19 09:47:22.040983553 +0000 UTC m=+151.330415191" lastFinishedPulling="2026-02-19 09:47:54.505980918 +0000 UTC m=+183.795412556" observedRunningTime="2026-02-19 
09:47:55.111676118 +0000 UTC m=+184.401107756" watchObservedRunningTime="2026-02-19 09:47:55.112873953 +0000 UTC m=+184.402305591" Feb 19 09:47:55 crc kubenswrapper[4873]: I0219 09:47:55.149419 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gjn8l" podStartSLOduration=2.865984052 podStartE2EDuration="34.149397653s" podCreationTimestamp="2026-02-19 09:47:21 +0000 UTC" firstStartedPulling="2026-02-19 09:47:23.156715332 +0000 UTC m=+152.446146970" lastFinishedPulling="2026-02-19 09:47:54.440128933 +0000 UTC m=+183.729560571" observedRunningTime="2026-02-19 09:47:55.147794837 +0000 UTC m=+184.437226475" watchObservedRunningTime="2026-02-19 09:47:55.149397653 +0000 UTC m=+184.438829281" Feb 19 09:47:59 crc kubenswrapper[4873]: I0219 09:47:59.096898 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:59 crc kubenswrapper[4873]: I0219 09:47:59.098076 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:59 crc kubenswrapper[4873]: I0219 09:47:59.310488 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:47:59 crc kubenswrapper[4873]: I0219 09:47:59.609570 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:48:00 crc kubenswrapper[4873]: I0219 09:48:00.090590 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:48:00 crc kubenswrapper[4873]: I0219 09:48:00.921345 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:00 crc kubenswrapper[4873]: I0219 09:48:00.923150 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:00 crc kubenswrapper[4873]: I0219 09:48:00.962727 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.098005 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.268821 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 09:48:01 crc kubenswrapper[4873]: E0219 09:48:01.269144 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.269159 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: E0219 09:48:01.269177 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.269186 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.269314 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5411b1f-9d3d-46b7-b1ed-5f8e811f68ee" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.269330 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74b91a5-c78a-4bd3-92d5-1fd2b7237aca" containerName="pruner" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.269765 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.272287 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.272457 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.279867 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.357618 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.357654 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.426768 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.460579 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 
09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.460644 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.461195 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.485891 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.637548 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.670995 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.671427 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:01 crc kubenswrapper[4873]: I0219 09:48:01.726342 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:02 crc kubenswrapper[4873]: I0219 09:48:02.067694 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8mch8" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="registry-server" containerID="cri-o://2dca850c88c97df8745ed0cd2c09857330c9ddf17d65678c2f357f2f1bc1105f" gracePeriod=2 Feb 19 09:48:02 crc kubenswrapper[4873]: I0219 09:48:02.068210 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 09:48:02 crc kubenswrapper[4873]: W0219 09:48:02.074939 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc21b431f_3ddc_4b17_b162_d39ec7981ec3.slice/crio-3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5 WatchSource:0}: Error finding container 3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5: Status 404 returned error can't find the container with id 3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5 Feb 19 09:48:02 crc kubenswrapper[4873]: I0219 09:48:02.129232 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:03 crc kubenswrapper[4873]: I0219 09:48:03.072015 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c21b431f-3ddc-4b17-b162-d39ec7981ec3","Type":"ContainerStarted","Data":"3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5"} Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.080020 4873 generic.go:334] "Generic (PLEG): container finished" podID="061e8672-31d8-48ec-87fc-158e44af91e4" containerID="2dca850c88c97df8745ed0cd2c09857330c9ddf17d65678c2f357f2f1bc1105f" exitCode=0 Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.080151 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerDied","Data":"2dca850c88c97df8745ed0cd2c09857330c9ddf17d65678c2f357f2f1bc1105f"} Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.081390 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c21b431f-3ddc-4b17-b162-d39ec7981ec3","Type":"ContainerStarted","Data":"2daef70c96b51618fa39301e65d6fec67bde37d7da8c4868ef080055f31a9403"} Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.098096 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.098070743 podStartE2EDuration="3.098070743s" podCreationTimestamp="2026-02-19 09:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:04.092881664 +0000 UTC m=+193.382313302" watchObservedRunningTime="2026-02-19 09:48:04.098070743 +0000 UTC m=+193.387502421" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.357286 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.406397 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities\") pod \"061e8672-31d8-48ec-87fc-158e44af91e4\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.406456 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq6xh\" (UniqueName: \"kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh\") pod \"061e8672-31d8-48ec-87fc-158e44af91e4\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.406559 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content\") pod \"061e8672-31d8-48ec-87fc-158e44af91e4\" (UID: \"061e8672-31d8-48ec-87fc-158e44af91e4\") " Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.407283 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities" (OuterVolumeSpecName: "utilities") pod "061e8672-31d8-48ec-87fc-158e44af91e4" (UID: "061e8672-31d8-48ec-87fc-158e44af91e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.411656 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh" (OuterVolumeSpecName: "kube-api-access-kq6xh") pod "061e8672-31d8-48ec-87fc-158e44af91e4" (UID: "061e8672-31d8-48ec-87fc-158e44af91e4"). InnerVolumeSpecName "kube-api-access-kq6xh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.466936 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "061e8672-31d8-48ec-87fc-158e44af91e4" (UID: "061e8672-31d8-48ec-87fc-158e44af91e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.507484 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.507513 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq6xh\" (UniqueName: \"kubernetes.io/projected/061e8672-31d8-48ec-87fc-158e44af91e4-kube-api-access-kq6xh\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:04 crc kubenswrapper[4873]: I0219 09:48:04.507524 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/061e8672-31d8-48ec-87fc-158e44af91e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.087770 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mch8" event={"ID":"061e8672-31d8-48ec-87fc-158e44af91e4","Type":"ContainerDied","Data":"4f91cfd7b327a1281f11c3570024804d58c90780fc2519b06ceb03c7886c6273"} Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.087835 4873 scope.go:117] "RemoveContainer" containerID="2dca850c88c97df8745ed0cd2c09857330c9ddf17d65678c2f357f2f1bc1105f" Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.087792 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8mch8" Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.089934 4873 generic.go:334] "Generic (PLEG): container finished" podID="c21b431f-3ddc-4b17-b162-d39ec7981ec3" containerID="2daef70c96b51618fa39301e65d6fec67bde37d7da8c4868ef080055f31a9403" exitCode=0 Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.089968 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c21b431f-3ddc-4b17-b162-d39ec7981ec3","Type":"ContainerDied","Data":"2daef70c96b51618fa39301e65d6fec67bde37d7da8c4868ef080055f31a9403"} Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.109392 4873 scope.go:117] "RemoveContainer" containerID="dde867d0270d4b3c266ef7d78245dcb4ac5bc44881658ebda50b8400fe4b5ef0" Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.135363 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.138766 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8mch8"] Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.141476 4873 scope.go:117] "RemoveContainer" containerID="0ca6133aff5d54bab32e361a7911c1c5856dd641828cf1fe1361309a8d03164b" Feb 19 09:48:05 crc kubenswrapper[4873]: I0219 09:48:05.490950 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" path="/var/lib/kubelet/pods/061e8672-31d8-48ec-87fc-158e44af91e4/volumes" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.400391 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.428500 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir\") pod \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.428558 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access\") pod \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\" (UID: \"c21b431f-3ddc-4b17-b162-d39ec7981ec3\") " Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.428652 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c21b431f-3ddc-4b17-b162-d39ec7981ec3" (UID: "c21b431f-3ddc-4b17-b162-d39ec7981ec3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.434845 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c21b431f-3ddc-4b17-b162-d39ec7981ec3" (UID: "c21b431f-3ddc-4b17-b162-d39ec7981ec3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.529740 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.529784 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c21b431f-3ddc-4b17-b162-d39ec7981ec3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881400 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:48:06 crc kubenswrapper[4873]: E0219 09:48:06.881750 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="extract-content" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881774 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="extract-content" Feb 19 09:48:06 crc kubenswrapper[4873]: E0219 09:48:06.881798 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b431f-3ddc-4b17-b162-d39ec7981ec3" containerName="pruner" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881806 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b431f-3ddc-4b17-b162-d39ec7981ec3" containerName="pruner" Feb 19 09:48:06 crc kubenswrapper[4873]: E0219 09:48:06.881822 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="extract-utilities" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881831 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="extract-utilities" Feb 19 09:48:06 crc kubenswrapper[4873]: E0219 09:48:06.881843 4873 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="registry-server" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881851 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="registry-server" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881976 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21b431f-3ddc-4b17-b162-d39ec7981ec3" containerName="pruner" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.881990 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="061e8672-31d8-48ec-87fc-158e44af91e4" containerName="registry-server" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.882513 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.886311 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.933826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.933887 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:06 crc kubenswrapper[4873]: I0219 09:48:06.933905 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.035309 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.035388 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.035449 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.035486 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.035538 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir\") pod \"installer-9-crc\" 
(UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.053793 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.110769 4873 generic.go:334] "Generic (PLEG): container finished" podID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerID="3925b9cd7df38893cf6f1abc778ceaaf22660b5582a02ffd58d2352d46ffbced" exitCode=0 Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.110845 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerDied","Data":"3925b9cd7df38893cf6f1abc778ceaaf22660b5582a02ffd58d2352d46ffbced"} Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.114747 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c21b431f-3ddc-4b17-b162-d39ec7981ec3","Type":"ContainerDied","Data":"3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5"} Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.114787 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c0e0298ddffb29ceba29d4f153841c4c537720b187ecb029d5a54190b91e7b5" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.114823 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.196453 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:48:07 crc kubenswrapper[4873]: I0219 09:48:07.622334 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:48:07 crc kubenswrapper[4873]: W0219 09:48:07.633010 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddcd45a6e_fa80_4995_bab8_20796784d618.slice/crio-3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456 WatchSource:0}: Error finding container 3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456: Status 404 returned error can't find the container with id 3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456 Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.121505 4873 generic.go:334] "Generic (PLEG): container finished" podID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerID="1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a" exitCode=0 Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.121604 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerDied","Data":"1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a"} Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.126271 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerStarted","Data":"2b6ac58d0b390cd493f6fbf85d5fe8746c584e84c39ef88c4e2c4eff614d5cf2"} Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.128851 4873 generic.go:334] "Generic (PLEG): container finished" podID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerID="502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8" exitCode=0 Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.128909 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerDied","Data":"502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8"} Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.131990 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcd45a6e-fa80-4995-bab8-20796784d618","Type":"ContainerStarted","Data":"12e07b634f8034e56f9833d14110782d34f2365b31aa7149ce239d933850da51"} Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.132020 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcd45a6e-fa80-4995-bab8-20796784d618","Type":"ContainerStarted","Data":"3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456"} Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.162163 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.162144824 podStartE2EDuration="2.162144824s" podCreationTimestamp="2026-02-19 09:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:08.161277452 +0000 UTC m=+197.450709100" watchObservedRunningTime="2026-02-19 09:48:08.162144824 +0000 UTC m=+197.451576462" Feb 19 09:48:08 crc kubenswrapper[4873]: I0219 09:48:08.203715 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzcdv" podStartSLOduration=2.807075704 podStartE2EDuration="47.2036986s" podCreationTimestamp="2026-02-19 09:47:21 +0000 UTC" firstStartedPulling="2026-02-19 09:47:23.108926718 +0000 UTC m=+152.398358356" lastFinishedPulling="2026-02-19 09:48:07.505549624 +0000 UTC m=+196.794981252" observedRunningTime="2026-02-19 09:48:08.200220362 +0000 UTC m=+197.489652020" 
watchObservedRunningTime="2026-02-19 09:48:08.2036986 +0000 UTC m=+197.493130238" Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.141810 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerStarted","Data":"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3"} Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.144558 4873 generic.go:334] "Generic (PLEG): container finished" podID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerID="42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c" exitCode=0 Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.144639 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerDied","Data":"42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c"} Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.147140 4873 generic.go:334] "Generic (PLEG): container finished" podID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerID="45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c" exitCode=0 Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.147216 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerDied","Data":"45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c"} Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.149371 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerStarted","Data":"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf"} Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.167320 4873 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/community-operators-5fj2x" podStartSLOduration=3.623359696 podStartE2EDuration="51.167287376s" podCreationTimestamp="2026-02-19 09:47:18 +0000 UTC" firstStartedPulling="2026-02-19 09:47:20.989320456 +0000 UTC m=+150.278752094" lastFinishedPulling="2026-02-19 09:48:08.533248126 +0000 UTC m=+197.822679774" observedRunningTime="2026-02-19 09:48:09.166274791 +0000 UTC m=+198.455706439" watchObservedRunningTime="2026-02-19 09:48:09.167287376 +0000 UTC m=+198.456719034" Feb 19 09:48:09 crc kubenswrapper[4873]: I0219 09:48:09.232660 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnf24" podStartSLOduration=3.733666297 podStartE2EDuration="51.232642302s" podCreationTimestamp="2026-02-19 09:47:18 +0000 UTC" firstStartedPulling="2026-02-19 09:47:21.021089039 +0000 UTC m=+150.310520677" lastFinishedPulling="2026-02-19 09:48:08.520065044 +0000 UTC m=+197.809496682" observedRunningTime="2026-02-19 09:48:09.232037477 +0000 UTC m=+198.521469155" watchObservedRunningTime="2026-02-19 09:48:09.232642302 +0000 UTC m=+198.522073940" Feb 19 09:48:10 crc kubenswrapper[4873]: I0219 09:48:10.159030 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerStarted","Data":"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a"} Feb 19 09:48:10 crc kubenswrapper[4873]: I0219 09:48:10.160722 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerStarted","Data":"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b"} Feb 19 09:48:10 crc kubenswrapper[4873]: I0219 09:48:10.184911 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jgk6" 
podStartSLOduration=2.644960723 podStartE2EDuration="50.184896554s" podCreationTimestamp="2026-02-19 09:47:20 +0000 UTC" firstStartedPulling="2026-02-19 09:47:22.034201188 +0000 UTC m=+151.323632826" lastFinishedPulling="2026-02-19 09:48:09.574137019 +0000 UTC m=+198.863568657" observedRunningTime="2026-02-19 09:48:10.184880534 +0000 UTC m=+199.474312162" watchObservedRunningTime="2026-02-19 09:48:10.184896554 +0000 UTC m=+199.474328192" Feb 19 09:48:10 crc kubenswrapper[4873]: I0219 09:48:10.200703 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jm66x" podStartSLOduration=3.626930752 podStartE2EDuration="53.200683752s" podCreationTimestamp="2026-02-19 09:47:17 +0000 UTC" firstStartedPulling="2026-02-19 09:47:19.97421393 +0000 UTC m=+149.263645568" lastFinishedPulling="2026-02-19 09:48:09.54796693 +0000 UTC m=+198.837398568" observedRunningTime="2026-02-19 09:48:10.198503837 +0000 UTC m=+199.487935505" watchObservedRunningTime="2026-02-19 09:48:10.200683752 +0000 UTC m=+199.490115410" Feb 19 09:48:11 crc kubenswrapper[4873]: I0219 09:48:11.228118 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:11 crc kubenswrapper[4873]: I0219 09:48:11.228170 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:11 crc kubenswrapper[4873]: I0219 09:48:11.869725 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:11 crc kubenswrapper[4873]: I0219 09:48:11.869789 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:12 crc kubenswrapper[4873]: I0219 09:48:12.277613 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2jgk6" 
podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="registry-server" probeResult="failure" output=< Feb 19 09:48:12 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 09:48:12 crc kubenswrapper[4873]: > Feb 19 09:48:12 crc kubenswrapper[4873]: I0219 09:48:12.907884 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzcdv" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="registry-server" probeResult="failure" output=< Feb 19 09:48:12 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 09:48:12 crc kubenswrapper[4873]: > Feb 19 09:48:16 crc kubenswrapper[4873]: I0219 09:48:16.616539 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerName="oauth-openshift" containerID="cri-o://97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757" gracePeriod=15 Feb 19 09:48:16 crc kubenswrapper[4873]: I0219 09:48:16.995995 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.029689 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-7q58w"] Feb 19 09:48:17 crc kubenswrapper[4873]: E0219 09:48:17.029923 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerName="oauth-openshift" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.029938 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerName="oauth-openshift" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.030058 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerName="oauth-openshift" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.030509 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.046191 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-7q58w"] Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163219 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxsx\" (UniqueName: \"kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: 
\"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163250 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163282 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163332 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163358 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163379 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: 
\"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163398 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.163553 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164124 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164249 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164288 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164344 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164372 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164398 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164439 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: 
I0219 09:48:17.164468 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error\") pod \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\" (UID: \"9324aa8b-fbce-42bb-b339-0aa2e382efd4\") " Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164631 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164663 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164672 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164684 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-dir\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164720 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164747 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164782 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-policies\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164804 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164880 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164904 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164949 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164969 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.164991 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165037 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165072 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz26\" (UniqueName: \"kubernetes.io/projected/82e487b8-1a99-4f7a-902a-049dcbaa2715-kube-api-access-2pz26\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165159 4873 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165174 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165188 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.165200 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.166754 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.174467 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.174851 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.175705 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx" (OuterVolumeSpecName: "kube-api-access-fpxsx") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "kube-api-access-fpxsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.176521 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.179998 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.180211 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.182324 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.182548 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.183019 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9324aa8b-fbce-42bb-b339-0aa2e382efd4" (UID: "9324aa8b-fbce-42bb-b339-0aa2e382efd4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.197948 4873 generic.go:334] "Generic (PLEG): container finished" podID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" containerID="97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757" exitCode=0 Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.197994 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" event={"ID":"9324aa8b-fbce-42bb-b339-0aa2e382efd4","Type":"ContainerDied","Data":"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757"} Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.198015 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.198032 4873 scope.go:117] "RemoveContainer" containerID="97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.198021 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g545" event={"ID":"9324aa8b-fbce-42bb-b339-0aa2e382efd4","Type":"ContainerDied","Data":"b4cafb3addf61abe3b1441fa50a8321f11c79cf993ea43c1a09c9c8ca90fbdfc"} Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.229903 4873 scope.go:117] "RemoveContainer" containerID="97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757" Feb 19 09:48:17 crc kubenswrapper[4873]: E0219 09:48:17.230529 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757\": container with ID starting with 97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757 not found: ID does not exist" containerID="97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757" 
Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.230579 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757"} err="failed to get container status \"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757\": rpc error: code = NotFound desc = could not find container \"97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757\": container with ID starting with 97275dc87160b7aaab21e362565d26836620eb22c875e626afdba41f45da3757 not found: ID does not exist" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.234036 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"] Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.238540 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g545"] Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266081 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266158 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266184 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-policies\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266207 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266242 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266262 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266280 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-service-ca\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " 
pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266296 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266311 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266341 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266362 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz26\" (UniqueName: \"kubernetes.io/projected/82e487b8-1a99-4f7a-902a-049dcbaa2715-kube-api-access-2pz26\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266395 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266410 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-dir\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266464 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266475 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266485 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266496 4873 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9324aa8b-fbce-42bb-b339-0aa2e382efd4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266504 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266517 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266528 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266539 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpxsx\" (UniqueName: \"kubernetes.io/projected/9324aa8b-fbce-42bb-b339-0aa2e382efd4-kube-api-access-fpxsx\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266550 4873 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266560 4873 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9324aa8b-fbce-42bb-b339-0aa2e382efd4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.266601 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-dir\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.267257 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-service-ca\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.267632 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-audit-policies\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.267938 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.268578 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.271599 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.271628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.271638 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.272051 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: 
\"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.272184 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.272230 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.272388 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.274268 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82e487b8-1a99-4f7a-902a-049dcbaa2715-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.285568 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz26\" 
(UniqueName: \"kubernetes.io/projected/82e487b8-1a99-4f7a-902a-049dcbaa2715-kube-api-access-2pz26\") pod \"oauth-openshift-9565f95f5-7q58w\" (UID: \"82e487b8-1a99-4f7a-902a-049dcbaa2715\") " pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.345348 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.493681 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9324aa8b-fbce-42bb-b339-0aa2e382efd4" path="/var/lib/kubelet/pods/9324aa8b-fbce-42bb-b339-0aa2e382efd4/volumes" Feb 19 09:48:17 crc kubenswrapper[4873]: I0219 09:48:17.761503 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-7q58w"] Feb 19 09:48:17 crc kubenswrapper[4873]: W0219 09:48:17.782542 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82e487b8_1a99_4f7a_902a_049dcbaa2715.slice/crio-7c15b3027e2e58501d78526d48ad77fd4f3d49eb9619d286717e8cb4bb237de7 WatchSource:0}: Error finding container 7c15b3027e2e58501d78526d48ad77fd4f3d49eb9619d286717e8cb4bb237de7: Status 404 returned error can't find the container with id 7c15b3027e2e58501d78526d48ad77fd4f3d49eb9619d286717e8cb4bb237de7 Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.214322 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" event={"ID":"82e487b8-1a99-4f7a-902a-049dcbaa2715","Type":"ContainerStarted","Data":"7c15b3027e2e58501d78526d48ad77fd4f3d49eb9619d286717e8cb4bb237de7"} Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.241424 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.241502 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.241594 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.242462 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.242565 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837" gracePeriod=600 Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.437396 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.437455 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.516211 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.518934 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.527825 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.575090 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.931753 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.932165 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:18 crc kubenswrapper[4873]: I0219 09:48:18.991822 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.221870 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837" exitCode=0 Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.221958 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837"} Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.222016 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a"} Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.223463 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" event={"ID":"82e487b8-1a99-4f7a-902a-049dcbaa2715","Type":"ContainerStarted","Data":"8241b3ce1c6b5b4f0d12db7396350203e280fb8c3078015365e0b62e853f2b61"} Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.265697 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" podStartSLOduration=28.265673935 podStartE2EDuration="28.265673935s" podCreationTimestamp="2026-02-19 09:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:19.263341116 +0000 UTC m=+208.552772774" watchObservedRunningTime="2026-02-19 09:48:19.265673935 +0000 UTC m=+208.555105593" Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.271909 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.273132 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:19 crc kubenswrapper[4873]: I0219 09:48:19.278232 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:20 crc kubenswrapper[4873]: I0219 09:48:20.229829 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:20 crc kubenswrapper[4873]: I0219 09:48:20.236055 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9565f95f5-7q58w" Feb 19 09:48:21 crc kubenswrapper[4873]: I0219 09:48:21.231823 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:48:21 crc kubenswrapper[4873]: I0219 09:48:21.299786 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:21 crc kubenswrapper[4873]: I0219 09:48:21.357116 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:21 crc kubenswrapper[4873]: I0219 09:48:21.921235 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:21 crc kubenswrapper[4873]: I0219 09:48:21.972332 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.244259 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5fj2x" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="registry-server" containerID="cri-o://c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3" gracePeriod=2 Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.708052 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.874607 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities\") pod \"e52516d8-c410-4dbd-b41f-cbda11425b0e\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.874645 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc4qh\" (UniqueName: \"kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh\") pod \"e52516d8-c410-4dbd-b41f-cbda11425b0e\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.874675 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content\") pod \"e52516d8-c410-4dbd-b41f-cbda11425b0e\" (UID: \"e52516d8-c410-4dbd-b41f-cbda11425b0e\") " Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.875492 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities" (OuterVolumeSpecName: "utilities") pod "e52516d8-c410-4dbd-b41f-cbda11425b0e" (UID: "e52516d8-c410-4dbd-b41f-cbda11425b0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.879680 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh" (OuterVolumeSpecName: "kube-api-access-hc4qh") pod "e52516d8-c410-4dbd-b41f-cbda11425b0e" (UID: "e52516d8-c410-4dbd-b41f-cbda11425b0e"). InnerVolumeSpecName "kube-api-access-hc4qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.942678 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e52516d8-c410-4dbd-b41f-cbda11425b0e" (UID: "e52516d8-c410-4dbd-b41f-cbda11425b0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.977174 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.977234 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc4qh\" (UniqueName: \"kubernetes.io/projected/e52516d8-c410-4dbd-b41f-cbda11425b0e-kube-api-access-hc4qh\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:22 crc kubenswrapper[4873]: I0219 09:48:22.977256 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52516d8-c410-4dbd-b41f-cbda11425b0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.251318 4873 generic.go:334] "Generic (PLEG): container finished" podID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerID="c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3" exitCode=0 Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.251370 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fj2x" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.251391 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerDied","Data":"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3"} Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.251911 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fj2x" event={"ID":"e52516d8-c410-4dbd-b41f-cbda11425b0e","Type":"ContainerDied","Data":"3d229a1d7483ee232f5190406e28ea1aa38e3259959252fbb620deb657e8a447"} Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.251957 4873 scope.go:117] "RemoveContainer" containerID="c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.281023 4873 scope.go:117] "RemoveContainer" containerID="502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.282738 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.288336 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5fj2x"] Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.319808 4873 scope.go:117] "RemoveContainer" containerID="359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.346049 4873 scope.go:117] "RemoveContainer" containerID="c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3" Feb 19 09:48:23 crc kubenswrapper[4873]: E0219 09:48:23.346529 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3\": container with ID starting with c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3 not found: ID does not exist" containerID="c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.346567 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3"} err="failed to get container status \"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3\": rpc error: code = NotFound desc = could not find container \"c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3\": container with ID starting with c5a7b8fed896e37da1461b5e8c5138425ba8fee5d6a0a01037ce006e2bb0e0d3 not found: ID does not exist" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.346592 4873 scope.go:117] "RemoveContainer" containerID="502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8" Feb 19 09:48:23 crc kubenswrapper[4873]: E0219 09:48:23.347002 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8\": container with ID starting with 502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8 not found: ID does not exist" containerID="502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.347029 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8"} err="failed to get container status \"502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8\": rpc error: code = NotFound desc = could not find container \"502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8\": container with ID 
starting with 502bbe7da078201a1c59434c07360963e8b5efd274f7d66baccf688c7b0233a8 not found: ID does not exist" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.347046 4873 scope.go:117] "RemoveContainer" containerID="359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda" Feb 19 09:48:23 crc kubenswrapper[4873]: E0219 09:48:23.349548 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda\": container with ID starting with 359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda not found: ID does not exist" containerID="359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.349591 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda"} err="failed to get container status \"359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda\": rpc error: code = NotFound desc = could not find container \"359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda\": container with ID starting with 359abec79bd5f24ab125135475868fe55a53d89f41fe8e7574eec6dce9ec9eda not found: ID does not exist" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.489832 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" path="/var/lib/kubelet/pods/e52516d8-c410-4dbd-b41f-cbda11425b0e/volumes" Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.632466 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:48:23 crc kubenswrapper[4873]: I0219 09:48:23.632797 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jgk6" 
podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="registry-server" containerID="cri-o://ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a" gracePeriod=2 Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.071335 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.191174 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm9f6\" (UniqueName: \"kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6\") pod \"e767e90e-5146-4f1e-9f0b-5f5acb185429\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.191302 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities\") pod \"e767e90e-5146-4f1e-9f0b-5f5acb185429\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.191349 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content\") pod \"e767e90e-5146-4f1e-9f0b-5f5acb185429\" (UID: \"e767e90e-5146-4f1e-9f0b-5f5acb185429\") " Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.192432 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities" (OuterVolumeSpecName: "utilities") pod "e767e90e-5146-4f1e-9f0b-5f5acb185429" (UID: "e767e90e-5146-4f1e-9f0b-5f5acb185429"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.196308 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6" (OuterVolumeSpecName: "kube-api-access-vm9f6") pod "e767e90e-5146-4f1e-9f0b-5f5acb185429" (UID: "e767e90e-5146-4f1e-9f0b-5f5acb185429"). InnerVolumeSpecName "kube-api-access-vm9f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.220948 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e767e90e-5146-4f1e-9f0b-5f5acb185429" (UID: "e767e90e-5146-4f1e-9f0b-5f5acb185429"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.260187 4873 generic.go:334] "Generic (PLEG): container finished" podID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerID="ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a" exitCode=0 Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.260273 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jgk6" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.260232 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerDied","Data":"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a"} Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.260324 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jgk6" event={"ID":"e767e90e-5146-4f1e-9f0b-5f5acb185429","Type":"ContainerDied","Data":"2314d422015de89852637bb24195891fe7f3f4631802a8cc5426a4f84f3df229"} Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.260346 4873 scope.go:117] "RemoveContainer" containerID="ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.281498 4873 scope.go:117] "RemoveContainer" containerID="42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.292931 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm9f6\" (UniqueName: \"kubernetes.io/projected/e767e90e-5146-4f1e-9f0b-5f5acb185429-kube-api-access-vm9f6\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.292966 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.292981 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e767e90e-5146-4f1e-9f0b-5f5acb185429-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.306642 4873 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.309544 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jgk6"] Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.310887 4873 scope.go:117] "RemoveContainer" containerID="3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.332418 4873 scope.go:117] "RemoveContainer" containerID="ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a" Feb 19 09:48:24 crc kubenswrapper[4873]: E0219 09:48:24.332913 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a\": container with ID starting with ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a not found: ID does not exist" containerID="ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.332939 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a"} err="failed to get container status \"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a\": rpc error: code = NotFound desc = could not find container \"ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a\": container with ID starting with ed752748d14b1d34c7402f5f46433b310ca9c0b693dca983a096ae322ca1a06a not found: ID does not exist" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.332958 4873 scope.go:117] "RemoveContainer" containerID="42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c" Feb 19 09:48:24 crc kubenswrapper[4873]: E0219 09:48:24.333242 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c\": container with ID starting with 42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c not found: ID does not exist" containerID="42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.333395 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c"} err="failed to get container status \"42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c\": rpc error: code = NotFound desc = could not find container \"42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c\": container with ID starting with 42afdb85925fa3ea042d23acf3f719be18000b496cfd0c75215046ba3925179c not found: ID does not exist" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.333481 4873 scope.go:117] "RemoveContainer" containerID="3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6" Feb 19 09:48:24 crc kubenswrapper[4873]: E0219 09:48:24.333876 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6\": container with ID starting with 3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6 not found: ID does not exist" containerID="3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6" Feb 19 09:48:24 crc kubenswrapper[4873]: I0219 09:48:24.333926 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6"} err="failed to get container status \"3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6\": rpc error: code = NotFound desc = could not find container 
\"3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6\": container with ID starting with 3d19858345e60805d3d74034c65385dcd27dffb04874a42aed209e62e4d2c8a6 not found: ID does not exist" Feb 19 09:48:25 crc kubenswrapper[4873]: I0219 09:48:25.498682 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" path="/var/lib/kubelet/pods/e767e90e-5146-4f1e-9f0b-5f5acb185429/volumes" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.027051 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.027348 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzcdv" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="registry-server" containerID="cri-o://2b6ac58d0b390cd493f6fbf85d5fe8746c584e84c39ef88c4e2c4eff614d5cf2" gracePeriod=2 Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.278449 4873 generic.go:334] "Generic (PLEG): container finished" podID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerID="2b6ac58d0b390cd493f6fbf85d5fe8746c584e84c39ef88c4e2c4eff614d5cf2" exitCode=0 Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.278505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerDied","Data":"2b6ac58d0b390cd493f6fbf85d5fe8746c584e84c39ef88c4e2c4eff614d5cf2"} Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.504677 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.620673 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content\") pod \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.620811 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zms24\" (UniqueName: \"kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24\") pod \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.620892 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities\") pod \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\" (UID: \"d152d3c6-e3c6-4255-95b5-eafe02557eb9\") " Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.622298 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities" (OuterVolumeSpecName: "utilities") pod "d152d3c6-e3c6-4255-95b5-eafe02557eb9" (UID: "d152d3c6-e3c6-4255-95b5-eafe02557eb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.629639 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24" (OuterVolumeSpecName: "kube-api-access-zms24") pod "d152d3c6-e3c6-4255-95b5-eafe02557eb9" (UID: "d152d3c6-e3c6-4255-95b5-eafe02557eb9"). InnerVolumeSpecName "kube-api-access-zms24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.722957 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zms24\" (UniqueName: \"kubernetes.io/projected/d152d3c6-e3c6-4255-95b5-eafe02557eb9-kube-api-access-zms24\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.723011 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.773085 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d152d3c6-e3c6-4255-95b5-eafe02557eb9" (UID: "d152d3c6-e3c6-4255-95b5-eafe02557eb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:26 crc kubenswrapper[4873]: I0219 09:48:26.824556 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d152d3c6-e3c6-4255-95b5-eafe02557eb9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.286325 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzcdv" event={"ID":"d152d3c6-e3c6-4255-95b5-eafe02557eb9","Type":"ContainerDied","Data":"81340de24ca383dbb41a0340acf197019d868e5563832a3950dc50b33c15f087"} Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.286395 4873 scope.go:117] "RemoveContainer" containerID="2b6ac58d0b390cd493f6fbf85d5fe8746c584e84c39ef88c4e2c4eff614d5cf2" Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.286427 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzcdv" Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.312244 4873 scope.go:117] "RemoveContainer" containerID="3925b9cd7df38893cf6f1abc778ceaaf22660b5582a02ffd58d2352d46ffbced" Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.331092 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.334454 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzcdv"] Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.348582 4873 scope.go:117] "RemoveContainer" containerID="75a546ab60f91886bf73906724d9833647cf46b858664cd39c852a73088064e8" Feb 19 09:48:27 crc kubenswrapper[4873]: I0219 09:48:27.493416 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" path="/var/lib/kubelet/pods/d152d3c6-e3c6-4255-95b5-eafe02557eb9/volumes" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.413858 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.414705 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnf24" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="registry-server" containerID="cri-o://65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf" gracePeriod=30 Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.440323 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.440675 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jm66x" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" 
containerName="registry-server" containerID="cri-o://f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b" gracePeriod=30 Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.449732 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.450075 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" containerID="cri-o://1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026" gracePeriod=30 Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.454055 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.454363 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hv2j6" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="registry-server" containerID="cri-o://74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba" gracePeriod=30 Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.454846 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jt9rj"] Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455077 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455113 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455128 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" 
containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455138 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455149 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455157 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455172 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455180 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455190 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455198 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="extract-content" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455209 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455217 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455231 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" 
containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455240 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455259 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455267 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.455279 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455287 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="extract-utilities" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455399 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e767e90e-5146-4f1e-9f0b-5f5acb185429" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455414 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d152d3c6-e3c6-4255-95b5-eafe02557eb9" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455427 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52516d8-c410-4dbd-b41f-cbda11425b0e" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.455855 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.457865 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.458454 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gjn8l" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="registry-server" containerID="cri-o://f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" gracePeriod=30 Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.462933 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jt9rj"] Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.636685 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.636854 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqs4\" (UniqueName: \"kubernetes.io/projected/1d58439b-31c6-44df-a32d-48f0fcb6a361-kube-api-access-2vqs4\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.637037 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.671723 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 is running failed: container process not found" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.672156 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 is running failed: container process not found" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.672515 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 is running failed: container process not found" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:41 crc kubenswrapper[4873]: E0219 09:48:41.672543 4873 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-gjn8l" 
podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="registry-server" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.738196 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqs4\" (UniqueName: \"kubernetes.io/projected/1d58439b-31c6-44df-a32d-48f0fcb6a361-kube-api-access-2vqs4\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.738285 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.738325 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.739894 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.747050 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/1d58439b-31c6-44df-a32d-48f0fcb6a361-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.762464 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqs4\" (UniqueName: \"kubernetes.io/projected/1d58439b-31c6-44df-a32d-48f0fcb6a361-kube-api-access-2vqs4\") pod \"marketplace-operator-79b997595-jt9rj\" (UID: \"1d58439b-31c6-44df-a32d-48f0fcb6a361\") " pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.881799 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.889762 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.891071 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.901030 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:41 crc kubenswrapper[4873]: I0219 09:48:41.907224 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041669 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66n7\" (UniqueName: \"kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7\") pod \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041724 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics\") pod \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041749 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities\") pod \"7423538a-949c-4995-bcf8-f2b6a2f8d914\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041772 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities\") pod \"0954690a-09f0-4b1b-be57-db87e9304488\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041792 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca\") pod \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\" (UID: \"f905b5ea-71df-4b1c-997c-d68766bcfcfe\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041824 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content\") pod \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041842 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content\") pod \"0954690a-09f0-4b1b-be57-db87e9304488\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041868 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4gtj\" (UniqueName: \"kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj\") pod \"0954690a-09f0-4b1b-be57-db87e9304488\" (UID: \"0954690a-09f0-4b1b-be57-db87e9304488\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041897 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnz6w\" (UniqueName: \"kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w\") pod \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041924 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gph4z\" (UniqueName: \"kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z\") pod \"7423538a-949c-4995-bcf8-f2b6a2f8d914\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.041970 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content\") pod 
\"7423538a-949c-4995-bcf8-f2b6a2f8d914\" (UID: \"7423538a-949c-4995-bcf8-f2b6a2f8d914\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.042007 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities\") pod \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\" (UID: \"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.045876 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f905b5ea-71df-4b1c-997c-d68766bcfcfe" (UID: "f905b5ea-71df-4b1c-997c-d68766bcfcfe"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.046228 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities" (OuterVolumeSpecName: "utilities") pod "0954690a-09f0-4b1b-be57-db87e9304488" (UID: "0954690a-09f0-4b1b-be57-db87e9304488"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.046496 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj" (OuterVolumeSpecName: "kube-api-access-f4gtj") pod "0954690a-09f0-4b1b-be57-db87e9304488" (UID: "0954690a-09f0-4b1b-be57-db87e9304488"). InnerVolumeSpecName "kube-api-access-f4gtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.046533 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w" (OuterVolumeSpecName: "kube-api-access-vnz6w") pod "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" (UID: "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3"). InnerVolumeSpecName "kube-api-access-vnz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.046799 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7" (OuterVolumeSpecName: "kube-api-access-t66n7") pod "f905b5ea-71df-4b1c-997c-d68766bcfcfe" (UID: "f905b5ea-71df-4b1c-997c-d68766bcfcfe"). InnerVolumeSpecName "kube-api-access-t66n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.046938 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z" (OuterVolumeSpecName: "kube-api-access-gph4z") pod "7423538a-949c-4995-bcf8-f2b6a2f8d914" (UID: "7423538a-949c-4995-bcf8-f2b6a2f8d914"). InnerVolumeSpecName "kube-api-access-gph4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.048983 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f905b5ea-71df-4b1c-997c-d68766bcfcfe" (UID: "f905b5ea-71df-4b1c-997c-d68766bcfcfe"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.050420 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities" (OuterVolumeSpecName: "utilities") pod "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" (UID: "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.052914 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities" (OuterVolumeSpecName: "utilities") pod "7423538a-949c-4995-bcf8-f2b6a2f8d914" (UID: "7423538a-949c-4995-bcf8-f2b6a2f8d914"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.091958 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0954690a-09f0-4b1b-be57-db87e9304488" (UID: "0954690a-09f0-4b1b-be57-db87e9304488"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.119607 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" (UID: "9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.134835 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144455 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144477 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144488 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144496 4873 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f905b5ea-71df-4b1c-997c-d68766bcfcfe-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144505 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144513 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0954690a-09f0-4b1b-be57-db87e9304488-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144520 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4gtj\" (UniqueName: \"kubernetes.io/projected/0954690a-09f0-4b1b-be57-db87e9304488-kube-api-access-f4gtj\") on node \"crc\" DevicePath \"\"" Feb 
19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144530 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnz6w\" (UniqueName: \"kubernetes.io/projected/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-kube-api-access-vnz6w\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144540 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gph4z\" (UniqueName: \"kubernetes.io/projected/7423538a-949c-4995-bcf8-f2b6a2f8d914-kube-api-access-gph4z\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144548 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.144556 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t66n7\" (UniqueName: \"kubernetes.io/projected/f905b5ea-71df-4b1c-997c-d68766bcfcfe-kube-api-access-t66n7\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.206775 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7423538a-949c-4995-bcf8-f2b6a2f8d914" (UID: "7423538a-949c-4995-bcf8-f2b6a2f8d914"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.245898 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content\") pod \"d5d58373-fe5d-4afe-9da1-256843164ff4\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.246017 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities\") pod \"d5d58373-fe5d-4afe-9da1-256843164ff4\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.246062 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sl6f\" (UniqueName: \"kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f\") pod \"d5d58373-fe5d-4afe-9da1-256843164ff4\" (UID: \"d5d58373-fe5d-4afe-9da1-256843164ff4\") " Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.246280 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7423538a-949c-4995-bcf8-f2b6a2f8d914-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.246984 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities" (OuterVolumeSpecName: "utilities") pod "d5d58373-fe5d-4afe-9da1-256843164ff4" (UID: "d5d58373-fe5d-4afe-9da1-256843164ff4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.250262 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f" (OuterVolumeSpecName: "kube-api-access-7sl6f") pod "d5d58373-fe5d-4afe-9da1-256843164ff4" (UID: "d5d58373-fe5d-4afe-9da1-256843164ff4"). InnerVolumeSpecName "kube-api-access-7sl6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.305518 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5d58373-fe5d-4afe-9da1-256843164ff4" (UID: "d5d58373-fe5d-4afe-9da1-256843164ff4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.349013 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.349405 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d58373-fe5d-4afe-9da1-256843164ff4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.349426 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sl6f\" (UniqueName: \"kubernetes.io/projected/d5d58373-fe5d-4afe-9da1-256843164ff4-kube-api-access-7sl6f\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.350783 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jt9rj"] Feb 19 09:48:42 crc kubenswrapper[4873]: 
W0219 09:48:42.360271 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d58439b_31c6_44df_a32d_48f0fcb6a361.slice/crio-0986493cd56ce3102e31a035df4a0cedabeb9d20f536849198783f901982b1e6 WatchSource:0}: Error finding container 0986493cd56ce3102e31a035df4a0cedabeb9d20f536849198783f901982b1e6: Status 404 returned error can't find the container with id 0986493cd56ce3102e31a035df4a0cedabeb9d20f536849198783f901982b1e6 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.367316 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" event={"ID":"1d58439b-31c6-44df-a32d-48f0fcb6a361","Type":"ContainerStarted","Data":"0986493cd56ce3102e31a035df4a0cedabeb9d20f536849198783f901982b1e6"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.369730 4873 generic.go:334] "Generic (PLEG): container finished" podID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerID="f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.369821 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerDied","Data":"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.369879 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jm66x" event={"ID":"d5d58373-fe5d-4afe-9da1-256843164ff4","Type":"ContainerDied","Data":"81841dde96f0bd4c162af34dbaad80f9410851d99d1c9f71453d6997e197a658"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.369920 4873 scope.go:117] "RemoveContainer" containerID="f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.370066 4873 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jm66x" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.373309 4873 generic.go:334] "Generic (PLEG): container finished" podID="0954690a-09f0-4b1b-be57-db87e9304488" containerID="74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.373382 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hv2j6" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.373399 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerDied","Data":"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.373446 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hv2j6" event={"ID":"0954690a-09f0-4b1b-be57-db87e9304488","Type":"ContainerDied","Data":"edab2539f7fc8755b323d06c9cc87b6333d411f7bbacd04da485c28f244826a3"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.375568 4873 generic.go:334] "Generic (PLEG): container finished" podID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerID="1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.375669 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.375686 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" event={"ID":"f905b5ea-71df-4b1c-997c-d68766bcfcfe","Type":"ContainerDied","Data":"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.375868 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86hhq" event={"ID":"f905b5ea-71df-4b1c-997c-d68766bcfcfe","Type":"ContainerDied","Data":"019acffae30ee36980fd8260d8a8299738a95c80eb08007a6c0478560261a038"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.378836 4873 generic.go:334] "Generic (PLEG): container finished" podID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.378955 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerDied","Data":"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.379034 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjn8l" event={"ID":"7423538a-949c-4995-bcf8-f2b6a2f8d914","Type":"ContainerDied","Data":"5bca84b1a6668c5e7d3c16b7d1810bc8d1542096d34580cd77564b1a69e0e7cc"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.379307 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjn8l" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.383274 4873 generic.go:334] "Generic (PLEG): container finished" podID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerID="65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf" exitCode=0 Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.383316 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerDied","Data":"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.383341 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnf24" event={"ID":"9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3","Type":"ContainerDied","Data":"49ebe6c3ea35eaecd163d7a7c155a22151d195a56ce773049fc5f4d9fdced9e7"} Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.383425 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnf24" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.400549 4873 scope.go:117] "RemoveContainer" containerID="45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.431644 4873 scope.go:117] "RemoveContainer" containerID="c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.434858 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.440941 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jm66x"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.443456 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.445672 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hv2j6"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.469261 4873 scope.go:117] "RemoveContainer" containerID="f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.469732 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b\": container with ID starting with f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b not found: ID does not exist" containerID="f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.469786 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b"} err="failed to get container status \"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b\": rpc error: code = NotFound desc = could not find container \"f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b\": container with ID starting with f9966e12727ab7b65da136d3e9ae3ccc835716701fdc8c61d0ce85583d0d264b not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.469819 4873 scope.go:117] "RemoveContainer" containerID="45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.470266 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.470381 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c\": container with ID starting with 45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c not found: ID does not exist" containerID="45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.470409 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c"} err="failed to get container status \"45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c\": rpc error: code = NotFound desc = could not find container \"45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c\": container with ID starting with 45818128c5d2d6d5769ffe637ae6ad9e378ad3fe2558a9d3cca64ab9d5a6861c not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.470434 4873 scope.go:117] "RemoveContainer" 
containerID="c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.470925 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a\": container with ID starting with c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a not found: ID does not exist" containerID="c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.470967 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a"} err="failed to get container status \"c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a\": rpc error: code = NotFound desc = could not find container \"c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a\": container with ID starting with c2f5ae8579d6418292a6c0f3c975976d1e536a3be874c8a9279ae73eeb01983a not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.470991 4873 scope.go:117] "RemoveContainer" containerID="74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.473064 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnf24"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.476744 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.481131 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86hhq"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.483610 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.485673 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gjn8l"] Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.488898 4873 scope.go:117] "RemoveContainer" containerID="6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.499447 4873 scope.go:117] "RemoveContainer" containerID="8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.512786 4873 scope.go:117] "RemoveContainer" containerID="74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.513344 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba\": container with ID starting with 74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba not found: ID does not exist" containerID="74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.513438 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba"} err="failed to get container status \"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba\": rpc error: code = NotFound desc = could not find container \"74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba\": container with ID starting with 74fb37ff12f72c03a5f94367ab0f45dbbc7b48f6d463c3dd733000d61c96b4ba not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.513520 4873 scope.go:117] "RemoveContainer" containerID="6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9" 
Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.513895 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9\": container with ID starting with 6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9 not found: ID does not exist" containerID="6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.513926 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9"} err="failed to get container status \"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9\": rpc error: code = NotFound desc = could not find container \"6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9\": container with ID starting with 6e4a6ad4fb008d881b5209c2ea81c21452449e9b024f3846af5079a4f4b5e1a9 not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.513948 4873 scope.go:117] "RemoveContainer" containerID="8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.514933 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b\": container with ID starting with 8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b not found: ID does not exist" containerID="8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.515014 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b"} err="failed to get container status 
\"8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b\": rpc error: code = NotFound desc = could not find container \"8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b\": container with ID starting with 8612f8546567d73e69164de7ce77990ef80b0c0484ccccf94d45b84576d5ac5b not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.515080 4873 scope.go:117] "RemoveContainer" containerID="1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.547611 4873 scope.go:117] "RemoveContainer" containerID="1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.556774 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026\": container with ID starting with 1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026 not found: ID does not exist" containerID="1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.556825 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026"} err="failed to get container status \"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026\": rpc error: code = NotFound desc = could not find container \"1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026\": container with ID starting with 1c029703d8f2912597cb5a128fcbab53d9cfdb22e857dd41250cc1badc58b026 not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.556853 4873 scope.go:117] "RemoveContainer" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.592333 4873 
scope.go:117] "RemoveContainer" containerID="e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.609382 4873 scope.go:117] "RemoveContainer" containerID="9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.624533 4873 scope.go:117] "RemoveContainer" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.624972 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0\": container with ID starting with f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 not found: ID does not exist" containerID="f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625009 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0"} err="failed to get container status \"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0\": rpc error: code = NotFound desc = could not find container \"f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0\": container with ID starting with f88d143b0e1a50c7403539950ca7222c7ff265725c9ef86d1f4fc96a10db2db0 not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625038 4873 scope.go:117] "RemoveContainer" containerID="e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.625357 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2\": container with ID starting with 
e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2 not found: ID does not exist" containerID="e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625384 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2"} err="failed to get container status \"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2\": rpc error: code = NotFound desc = could not find container \"e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2\": container with ID starting with e509c49bf20ffebae8039816e8a5dbc8ef6b58ea2d9ef21e6d469e9a99d8fcf2 not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625401 4873 scope.go:117] "RemoveContainer" containerID="9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.625654 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e\": container with ID starting with 9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e not found: ID does not exist" containerID="9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625678 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e"} err="failed to get container status \"9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e\": rpc error: code = NotFound desc = could not find container \"9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e\": container with ID starting with 9629d2d8b3e9b42462cd22dea41c2d0741500af9092d6a0c14427b12d6440d6e not found: ID does not 
exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.625696 4873 scope.go:117] "RemoveContainer" containerID="65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.636646 4873 scope.go:117] "RemoveContainer" containerID="1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.650654 4873 scope.go:117] "RemoveContainer" containerID="ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.670346 4873 scope.go:117] "RemoveContainer" containerID="65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.674484 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf\": container with ID starting with 65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf not found: ID does not exist" containerID="65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.674520 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf"} err="failed to get container status \"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf\": rpc error: code = NotFound desc = could not find container \"65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf\": container with ID starting with 65f03008f264ea89ece1ee912ca47401c748ee7054983199b5bd318655ded9bf not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.674546 4873 scope.go:117] "RemoveContainer" containerID="1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a" Feb 19 09:48:42 crc 
kubenswrapper[4873]: E0219 09:48:42.674890 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a\": container with ID starting with 1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a not found: ID does not exist" containerID="1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.674915 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a"} err="failed to get container status \"1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a\": rpc error: code = NotFound desc = could not find container \"1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a\": container with ID starting with 1479a6032ed4c9b61367740ab02f5b5f34a581176449601b359feaf96fa47f0a not found: ID does not exist" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.674935 4873 scope.go:117] "RemoveContainer" containerID="ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210" Feb 19 09:48:42 crc kubenswrapper[4873]: E0219 09:48:42.675365 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210\": container with ID starting with ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210 not found: ID does not exist" containerID="ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210" Feb 19 09:48:42 crc kubenswrapper[4873]: I0219 09:48:42.675396 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210"} err="failed to get container status 
\"ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210\": rpc error: code = NotFound desc = could not find container \"ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210\": container with ID starting with ec81b30df0c0381b7218bf27c1e036ec8790d0ece7bfe1e31d1cbe023c4a6210 not found: ID does not exist" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.398472 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" event={"ID":"1d58439b-31c6-44df-a32d-48f0fcb6a361","Type":"ContainerStarted","Data":"21df5c2a915a009ceef25d6825f5c78f78a2145abfb94dd37b528ab70fa879e1"} Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.398795 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.403309 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.422611 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jt9rj" podStartSLOduration=2.422594324 podStartE2EDuration="2.422594324s" podCreationTimestamp="2026-02-19 09:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:43.417956367 +0000 UTC m=+232.707388035" watchObservedRunningTime="2026-02-19 09:48:43.422594324 +0000 UTC m=+232.712025962" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.492948 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0954690a-09f0-4b1b-be57-db87e9304488" path="/var/lib/kubelet/pods/0954690a-09f0-4b1b-be57-db87e9304488/volumes" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.494184 4873 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" path="/var/lib/kubelet/pods/7423538a-949c-4995-bcf8-f2b6a2f8d914/volumes" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.495450 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" path="/var/lib/kubelet/pods/9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3/volumes" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.496977 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" path="/var/lib/kubelet/pods/d5d58373-fe5d-4afe-9da1-256843164ff4/volumes" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.497839 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" path="/var/lib/kubelet/pods/f905b5ea-71df-4b1c-997c-d68766bcfcfe/volumes" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623343 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623618 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623646 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623671 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623684 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623698 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623710 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623726 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623737 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623751 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623762 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623775 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623785 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623798 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623809 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623825 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623837 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623855 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623866 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="extract-content" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623879 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623890 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="extract-utilities" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623904 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623915 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623929 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623940 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: E0219 09:48:43.623952 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.623963 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.624125 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f905b5ea-71df-4b1c-997c-d68766bcfcfe" containerName="marketplace-operator" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.624154 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0954690a-09f0-4b1b-be57-db87e9304488" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.624168 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d58373-fe5d-4afe-9da1-256843164ff4" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.624180 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7423538a-949c-4995-bcf8-f2b6a2f8d914" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.624196 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9a2892-8bf0-451c-b46c-eac8a5fe1ce3" containerName="registry-server" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.625216 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.627399 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.640770 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.774700 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8j57\" (UniqueName: \"kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.774774 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.774822 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.830213 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xvshp"] Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.832891 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.834932 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvshp"] Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.835350 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.876007 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8j57\" (UniqueName: \"kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.876140 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.876212 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.876801 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " 
pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.878387 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.894477 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8j57\" (UniqueName: \"kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57\") pod \"certified-operators-c2d4s\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.946125 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.976883 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-utilities\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.977054 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-catalog-content\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:43 crc kubenswrapper[4873]: I0219 09:48:43.977200 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hnjw8\" (UniqueName: \"kubernetes.io/projected/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-kube-api-access-hnjw8\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.078773 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-catalog-content\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.079117 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnjw8\" (UniqueName: \"kubernetes.io/projected/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-kube-api-access-hnjw8\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.079255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-utilities\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.079760 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-utilities\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.081671 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-catalog-content\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.097524 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnjw8\" (UniqueName: \"kubernetes.io/projected/f9a9b521-3ed0-40c1-b38f-34c21bd9c242-kube-api-access-hnjw8\") pod \"redhat-marketplace-xvshp\" (UID: \"f9a9b521-3ed0-40c1-b38f-34c21bd9c242\") " pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.152632 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.377755 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 09:48:44 crc kubenswrapper[4873]: W0219 09:48:44.384810 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92377803_fb7e_42d1_ba93_54235a8f9409.slice/crio-a08dd4d7e39597a08d46eee691df8d3e8119bb68e610a407ed93ede91eb7581e WatchSource:0}: Error finding container a08dd4d7e39597a08d46eee691df8d3e8119bb68e610a407ed93ede91eb7581e: Status 404 returned error can't find the container with id a08dd4d7e39597a08d46eee691df8d3e8119bb68e610a407ed93ede91eb7581e Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.409683 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerStarted","Data":"a08dd4d7e39597a08d46eee691df8d3e8119bb68e610a407ed93ede91eb7581e"} Feb 19 09:48:44 crc kubenswrapper[4873]: I0219 09:48:44.521349 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xvshp"] Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.419780 4873 generic.go:334] "Generic (PLEG): container finished" podID="92377803-fb7e-42d1-ba93-54235a8f9409" containerID="09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593" exitCode=0 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.419833 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerDied","Data":"09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593"} Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.423650 4873 generic.go:334] "Generic (PLEG): container finished" podID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" containerID="5ac66b7adc65a8e969f9606c8d6fe67c07864141a7835a55fb84b7ab451b4eaf" exitCode=0 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.423723 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvshp" event={"ID":"f9a9b521-3ed0-40c1-b38f-34c21bd9c242","Type":"ContainerDied","Data":"5ac66b7adc65a8e969f9606c8d6fe67c07864141a7835a55fb84b7ab451b4eaf"} Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.423762 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvshp" event={"ID":"f9a9b521-3ed0-40c1-b38f-34c21bd9c242","Type":"ContainerStarted","Data":"2b8f61d8cd4134ecf14385216fe4ce55e1fb1ff7e28906f0dc379ffee35e93de"} Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696227 4873 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696808 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117" gracePeriod=15 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696864 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3" gracePeriod=15 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696937 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11" gracePeriod=15 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696895 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e" gracePeriod=15 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.696919 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906" gracePeriod=15 Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698679 4873 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.698905 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698919 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.698929 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698937 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.698951 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698958 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.698969 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698977 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.698986 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.698992 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.699003 4873 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699009 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.699028 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699037 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699172 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699185 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699195 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699203 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699216 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699231 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:48:45 crc 
kubenswrapper[4873]: I0219 09:48:45.699243 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.699345 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.699355 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.700568 4873 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.704641 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.712678 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.770287 4873 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.799873 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.799915 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.799931 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.799948 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.799973 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.800035 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.800077 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.800156 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901069 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901284 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901200 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901347 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901386 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901428 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901484 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901556 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901609 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901656 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901700 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901816 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901927 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.901969 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: I0219 09:48:45.902010 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:45 crc kubenswrapper[4873]: E0219 09:48:45.972806 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-xvshp.18959ce2b614bbec openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-xvshp,UID:f9a9b521-3ed0-40c1-b38f-34c21bd9c242,APIVersion:v1,ResourceVersion:29638,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 545ms (545ms including waiting). Image size: 1202767548 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:48:45.9715123 +0000 UTC m=+235.260943958,LastTimestamp:2026-02-19 09:48:45.9715123 +0000 UTC m=+235.260943958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.071240 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:46 crc kubenswrapper[4873]: W0219 09:48:46.086038 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-efad096894788afb5a1c67b77f3e0f83ab20ad0047d03e3c2dac5a8b464b7df4 WatchSource:0}: Error finding container efad096894788afb5a1c67b77f3e0f83ab20ad0047d03e3c2dac5a8b464b7df4: Status 404 returned error can't find the container with id efad096894788afb5a1c67b77f3e0f83ab20ad0047d03e3c2dac5a8b464b7df4
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.429372 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1cd15f918dfe991d39aa6c783668167ec9ec210784e46772d7b52f35e68404c5"}
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.429686 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"efad096894788afb5a1c67b77f3e0f83ab20ad0047d03e3c2dac5a8b464b7df4"}
Feb 19 09:48:46 crc kubenswrapper[4873]: E0219 09:48:46.430350 4873 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.431419 4873 generic.go:334] "Generic (PLEG): container finished" podID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" containerID="15c977fb8314904bd6e91aa0233a7a89db6c5ac04c1a6328daf2247a377c9f30" exitCode=0
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.431472 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvshp" event={"ID":"f9a9b521-3ed0-40c1-b38f-34c21bd9c242","Type":"ContainerDied","Data":"15c977fb8314904bd6e91aa0233a7a89db6c5ac04c1a6328daf2247a377c9f30"}
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.432047 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.433375 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.436532 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.437385 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3" exitCode=0
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.437482 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11" exitCode=0
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.437542 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e" exitCode=0
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.437596 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906" exitCode=2
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.437708 4873 scope.go:117] "RemoveContainer" containerID="7b5c2e53f92fb4360b78c1a61cdf6f27c617ddd4e99e6a2e079c7f4f3bcc1e90"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.439682 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerStarted","Data":"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2"}
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.440365 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.440733 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.441691 4873 generic.go:334] "Generic (PLEG): container finished" podID="dcd45a6e-fa80-4995-bab8-20796784d618" containerID="12e07b634f8034e56f9833d14110782d34f2365b31aa7149ce239d933850da51" exitCode=0
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.441730 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcd45a6e-fa80-4995-bab8-20796784d618","Type":"ContainerDied","Data":"12e07b634f8034e56f9833d14110782d34f2365b31aa7149ce239d933850da51"}
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.442268 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.442638 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:46 crc kubenswrapper[4873]: I0219 09:48:46.442897 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.449950 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvshp" event={"ID":"f9a9b521-3ed0-40c1-b38f-34c21bd9c242","Type":"ContainerStarted","Data":"3ffce5a7300dbbb711c51003ccc1526dc7263702b970e7431a505507cdc556c3"}
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.451007 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.451398 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.451721 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.454644 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.465225 4873 generic.go:334] "Generic (PLEG): container finished" podID="92377803-fb7e-42d1-ba93-54235a8f9409" containerID="5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2" exitCode=0
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.465561 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerDied","Data":"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2"}
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.466158 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.466358 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.466552 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.701963 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.702919 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.703679 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.704094 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825306 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir\") pod \"dcd45a6e-fa80-4995-bab8-20796784d618\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") "
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825642 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access\") pod \"dcd45a6e-fa80-4995-bab8-20796784d618\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") "
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825708 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock\") pod \"dcd45a6e-fa80-4995-bab8-20796784d618\" (UID: \"dcd45a6e-fa80-4995-bab8-20796784d618\") "
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825419 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcd45a6e-fa80-4995-bab8-20796784d618" (UID: "dcd45a6e-fa80-4995-bab8-20796784d618"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825885 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock" (OuterVolumeSpecName: "var-lock") pod "dcd45a6e-fa80-4995-bab8-20796784d618" (UID: "dcd45a6e-fa80-4995-bab8-20796784d618"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.825937 4873 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.846484 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcd45a6e-fa80-4995-bab8-20796784d618" (UID: "dcd45a6e-fa80-4995-bab8-20796784d618"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.926604 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd45a6e-fa80-4995-bab8-20796784d618-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:47 crc kubenswrapper[4873]: I0219 09:48:47.926637 4873 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcd45a6e-fa80-4995-bab8-20796784d618-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.116976 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.117999 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.118754 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.119163 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.119577 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.119801 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229175 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229227 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229310 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229561 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229618 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.229650 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.331026 4873 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.331062 4873 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.331072 4873 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.473533 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.474260 4873 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117" exitCode=0
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.474339 4873 scope.go:117] "RemoveContainer" containerID="aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.474369 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.477201 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerStarted","Data":"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a"}
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.477898 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.478282 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.478608 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcd45a6e-fa80-4995-bab8-20796784d618","Type":"ContainerDied","Data":"3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456"}
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.478665 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3520c7c7b57ec4d387d5a52af5a8868db2183fc59373efaee14a1f7ca6894456"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.478747 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.479153 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.479196 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.488675 4873 scope.go:117] "RemoveContainer" containerID="b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.488715 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.488913 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.489114 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.489415 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.499092 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.499334 4873 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.499546 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.499747 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.500146 4873 scope.go:117] "RemoveContainer" containerID="f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.522697 4873 scope.go:117] "RemoveContainer" containerID="982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.539801 4873 scope.go:117] "RemoveContainer" containerID="cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.572453 4873 scope.go:117] "RemoveContainer" containerID="0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.598687 4873 scope.go:117] "RemoveContainer" containerID="aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3"
Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 09:48:48.599177 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\": container with ID starting with aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3 not found: ID does not exist" containerID="aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3"
Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.599204 4873
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3"} err="failed to get container status \"aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\": rpc error: code = NotFound desc = could not find container \"aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3\": container with ID starting with aa7017087969fb77ee99eae01d8a0baf7383b98a6f09d835a7d8937ade8a1ca3 not found: ID does not exist" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.599223 4873 scope.go:117] "RemoveContainer" containerID="b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11" Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 09:48:48.605844 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\": container with ID starting with b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11 not found: ID does not exist" containerID="b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.605902 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11"} err="failed to get container status \"b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\": rpc error: code = NotFound desc = could not find container \"b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11\": container with ID starting with b13498b714daa09f61bc71b622ad5fec9253b4cf67b602fc089424bb79794e11 not found: ID does not exist" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.605937 4873 scope.go:117] "RemoveContainer" containerID="f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e" Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 
09:48:48.606394 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\": container with ID starting with f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e not found: ID does not exist" containerID="f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.606429 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e"} err="failed to get container status \"f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\": rpc error: code = NotFound desc = could not find container \"f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e\": container with ID starting with f7bacb25129618d5756053587a398d7e6b60b1ebd6d218a88d3780851edf974e not found: ID does not exist" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.606450 4873 scope.go:117] "RemoveContainer" containerID="982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906" Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 09:48:48.606709 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\": container with ID starting with 982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906 not found: ID does not exist" containerID="982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.606738 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906"} err="failed to get container status \"982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\": rpc 
error: code = NotFound desc = could not find container \"982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906\": container with ID starting with 982ba46d11a4459c720ef1c2ac6668e88b58b96dca42d0085e4ae42b7a9c0906 not found: ID does not exist" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.606755 4873 scope.go:117] "RemoveContainer" containerID="cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117" Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 09:48:48.607014 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\": container with ID starting with cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117 not found: ID does not exist" containerID="cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.607051 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117"} err="failed to get container status \"cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\": rpc error: code = NotFound desc = could not find container \"cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117\": container with ID starting with cef5813f7cb2a1b36ecca7bb340e35f71b0b1a53edb19cd9bafda79e362ab117 not found: ID does not exist" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.607095 4873 scope.go:117] "RemoveContainer" containerID="0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279" Feb 19 09:48:48 crc kubenswrapper[4873]: E0219 09:48:48.607609 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\": container with ID starting with 
0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279 not found: ID does not exist" containerID="0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279" Feb 19 09:48:48 crc kubenswrapper[4873]: I0219 09:48:48.607647 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279"} err="failed to get container status \"0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\": rpc error: code = NotFound desc = could not find container \"0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279\": container with ID starting with 0dad8f7655015996610223e9351f82201d43903a0c017a8be9a56240bc160279 not found: ID does not exist" Feb 19 09:48:49 crc kubenswrapper[4873]: I0219 09:48:49.490434 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.715542 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.716366 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.716736 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.716955 4873 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.717179 4873 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:50 crc kubenswrapper[4873]: I0219 09:48:50.717209 4873 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.717445 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms" Feb 19 09:48:50 crc kubenswrapper[4873]: E0219 09:48:50.918057 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Feb 19 09:48:51 crc kubenswrapper[4873]: E0219 09:48:51.318899 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="800ms" Feb 19 09:48:51 crc kubenswrapper[4873]: I0219 09:48:51.486433 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:51 crc kubenswrapper[4873]: I0219 09:48:51.486815 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:51 crc kubenswrapper[4873]: I0219 09:48:51.487055 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.120516 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.205986 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:48:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:48:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:48:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:48:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.206548 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.206975 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.207364 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 
09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.207705 4873 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.207732 4873 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:48:52 crc kubenswrapper[4873]: E0219 09:48:52.517951 4873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-xvshp.18959ce2b614bbec openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-xvshp,UID:f9a9b521-3ed0-40c1-b38f-34c21bd9c242,APIVersion:v1,ResourceVersion:29638,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 545ms (545ms including waiting). 
Image size: 1202767548 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:48:45.9715123 +0000 UTC m=+235.260943958,LastTimestamp:2026-02-19 09:48:45.9715123 +0000 UTC m=+235.260943958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 09:48:53 crc kubenswrapper[4873]: E0219 09:48:53.724845 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="3.2s" Feb 19 09:48:53 crc kubenswrapper[4873]: I0219 09:48:53.946393 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:53 crc kubenswrapper[4873]: I0219 09:48:53.946476 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.021979 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.022614 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.023485 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 
38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.024408 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.153332 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.153397 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.192654 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.193888 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.194416 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.194875 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" 
pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.561178 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.562385 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.563041 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.563577 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.571317 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xvshp" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.572060 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" 
pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.572649 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:54 crc kubenswrapper[4873]: I0219 09:48:54.573088 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:56 crc kubenswrapper[4873]: E0219 09:48:56.926833 4873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="6.4s" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.483544 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.484714 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.485388 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.485751 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.500644 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.500691 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:48:59 crc kubenswrapper[4873]: E0219 09:48:59.501377 4873 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.502079 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:48:59 crc kubenswrapper[4873]: W0219 09:48:59.532979 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-34b4fa3b0b6b818a8178957453a86635ecce5a1a6de8a875bd50c7fc86caf429 WatchSource:0}: Error finding container 34b4fa3b0b6b818a8178957453a86635ecce5a1a6de8a875bd50c7fc86caf429: Status 404 returned error can't find the container with id 34b4fa3b0b6b818a8178957453a86635ecce5a1a6de8a875bd50c7fc86caf429 Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.557255 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.557345 4873 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850" exitCode=1 Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.557424 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850"} Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.557980 4873 scope.go:117] "RemoveContainer" containerID="c9e9d7320b0e9d2d241675d65d266b0cffe1c60ea0a221261787b340ca04d850" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.559139 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34b4fa3b0b6b818a8178957453a86635ecce5a1a6de8a875bd50c7fc86caf429"} Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.559212 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.559570 4873 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.560006 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:48:59 crc kubenswrapper[4873]: I0219 09:48:59.560343 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.566658 4873 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="916aafadcf66dd970321f181045d851463d2f96fb391a8b557bae5ea0786a4ab" exitCode=0 
Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.566743 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"916aafadcf66dd970321f181045d851463d2f96fb391a8b557bae5ea0786a4ab"} Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.567233 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.567281 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.567781 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: E0219 09:49:00.567920 4873 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.568381 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.568647 4873 status_manager.go:851] "Failed to get status for pod" 
podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.568969 4873 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.572772 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.572866 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af51a971d2d0522bb9002b49132470bda34bb1a02ca60cfc82d1929f48cb112d"} Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.573755 4873 status_manager.go:851] "Failed to get status for pod" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" pod="openshift-marketplace/certified-operators-c2d4s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2d4s\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.574173 4873 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.574487 4873 status_manager.go:851] "Failed to get status for pod" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:00 crc kubenswrapper[4873]: I0219 09:49:00.574993 4873 status_manager.go:851] "Failed to get status for pod" podUID="f9a9b521-3ed0-40c1-b38f-34c21bd9c242" pod="openshift-marketplace/redhat-marketplace-xvshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xvshp\": dial tcp 38.102.83.156:6443: connect: connection refused" Feb 19 09:49:01 crc kubenswrapper[4873]: I0219 09:49:01.589649 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c6339bec1dca8b314c006b7a4864177da334591576eecd00b09d4f3521a49b38"} Feb 19 09:49:01 crc kubenswrapper[4873]: I0219 09:49:01.589988 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3888a3b0e14c6dec78fe115c9eadc62ab000440126248d1d59d55eced4929f06"} Feb 19 09:49:01 crc kubenswrapper[4873]: I0219 09:49:01.590001 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b3dd4ce991aad56ebcb19abc048af169ba47d9e4a3dbc58596263e422f7727d"} Feb 19 09:49:01 crc kubenswrapper[4873]: I0219 09:49:01.590011 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f256ef8b0b3b7af748ec303bc2ecdf5f84bdedd23b3bed4d545fb0d12fab15ff"} Feb 19 09:49:02 crc kubenswrapper[4873]: I0219 09:49:02.599978 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b6b3848cd78789578434ba6cfb13eae1a718d6e12f52db62ef820a5ea154e2a"} Feb 19 09:49:02 crc kubenswrapper[4873]: I0219 09:49:02.600290 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:02 crc kubenswrapper[4873]: I0219 09:49:02.600307 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:02 crc kubenswrapper[4873]: I0219 09:49:02.600501 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:04 crc kubenswrapper[4873]: I0219 09:49:04.502649 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:04 crc kubenswrapper[4873]: I0219 09:49:04.502878 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:04 crc kubenswrapper[4873]: I0219 09:49:04.510228 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:07 crc kubenswrapper[4873]: I0219 09:49:07.614042 4873 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:07 crc kubenswrapper[4873]: I0219 09:49:07.658850 4873 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a9bbc9c-4111-45d8-a138-a15a31edae0a" Feb 19 09:49:07 crc kubenswrapper[4873]: I0219 09:49:07.998798 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.644303 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.644901 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.647188 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a9bbc9c-4111-45d8-a138-a15a31edae0a" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.655327 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.656173 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.656218 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.669888 4873 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://f256ef8b0b3b7af748ec303bc2ecdf5f84bdedd23b3bed4d545fb0d12fab15ff" Feb 19 09:49:08 crc kubenswrapper[4873]: I0219 09:49:08.669942 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:09 crc kubenswrapper[4873]: I0219 09:49:09.650283 4873 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:09 crc kubenswrapper[4873]: I0219 09:49:09.650635 4873 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d6df925a-1654-4ade-a300-97c316b0867f" Feb 19 09:49:09 crc kubenswrapper[4873]: I0219 09:49:09.655228 4873 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a9bbc9c-4111-45d8-a138-a15a31edae0a" Feb 19 09:49:17 crc kubenswrapper[4873]: I0219 09:49:17.415809 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.107083 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.270829 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.413203 4873 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.448129 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.520232 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.592126 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.619086 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.656008 4873 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.656087 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.692183 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.831174 4873 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 09:49:18 crc kubenswrapper[4873]: I0219 09:49:18.894703 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.146755 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.321145 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.405568 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.523386 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.631074 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.675243 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.721733 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.822939 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.896002 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 09:49:19 crc kubenswrapper[4873]: I0219 09:49:19.994751 4873 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.151436 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.261402 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.284700 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.403825 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.405436 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.516491 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.528971 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 09:49:20 crc kubenswrapper[4873]: I0219 09:49:20.842881 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.084966 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.183228 4873 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.212625 4873 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.214924 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xvshp" podStartSLOduration=36.736770484 podStartE2EDuration="38.214905505s" podCreationTimestamp="2026-02-19 09:48:43 +0000 UTC" firstStartedPulling="2026-02-19 09:48:45.425537246 +0000 UTC m=+234.714968884" lastFinishedPulling="2026-02-19 09:48:46.903672267 +0000 UTC m=+236.193103905" observedRunningTime="2026-02-19 09:49:07.721762364 +0000 UTC m=+257.011194002" watchObservedRunningTime="2026-02-19 09:49:21.214905505 +0000 UTC m=+270.504337163" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.216226 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c2d4s" podStartSLOduration=35.72569925 podStartE2EDuration="38.216215268s" podCreationTimestamp="2026-02-19 09:48:43 +0000 UTC" firstStartedPulling="2026-02-19 09:48:45.422859198 +0000 UTC m=+234.712290836" lastFinishedPulling="2026-02-19 09:48:47.913375216 +0000 UTC m=+237.202806854" observedRunningTime="2026-02-19 09:49:07.60244869 +0000 UTC m=+256.891880328" watchObservedRunningTime="2026-02-19 09:49:21.216215268 +0000 UTC m=+270.505646936" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.217938 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.217980 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.226885 4873 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.228058 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.228660 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.246834 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.246818465 podStartE2EDuration="14.246818465s" podCreationTimestamp="2026-02-19 09:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:49:21.246095067 +0000 UTC m=+270.535526725" watchObservedRunningTime="2026-02-19 09:49:21.246818465 +0000 UTC m=+270.536250103" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.393056 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.463138 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.626925 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.638011 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.751467 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.755462 4873 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.821616 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.894010 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.902062 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.903170 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:49:21 crc kubenswrapper[4873]: I0219 09:49:21.991617 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.001434 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.010434 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.021727 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.150957 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.205686 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 09:49:22 crc 
kubenswrapper[4873]: I0219 09:49:22.274735 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.318183 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.346045 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.371576 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.375533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.395812 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.433173 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.469781 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.470286 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.479525 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.575967 4873 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.667981 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.736086 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.810723 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.816678 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.832943 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.833090 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.872091 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 09:49:22 crc kubenswrapper[4873]: I0219 09:49:22.967830 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.014080 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.017576 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.062113 4873 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.146224 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.290773 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.521172 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.572821 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.581083 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.641614 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.641652 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.724562 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.731493 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 09:49:23 crc kubenswrapper[4873]: I0219 09:49:23.830070 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 09:49:23 crc 
kubenswrapper[4873]: I0219 09:49:23.832008 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.036218 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.123970 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.210965 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.324131 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.330584 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.341532 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.352084 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.525695 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.540597 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.542540 4873 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.555776 4873 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.572424 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.692442 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.694956 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.711935 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.718911 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.740054 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.740512 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.781487 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.827254 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.932628 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.952628 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 09:49:24 crc kubenswrapper[4873]: I0219 09:49:24.994796 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.006557 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.009013 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.124619 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.159312 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.235264 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.333278 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.354836 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.357605 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.397899 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.508470 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.527215 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.767695 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.892661 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.916407 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 19 09:49:25 crc kubenswrapper[4873]: I0219 09:49:25.943327 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.053007 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.230442 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.413967 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.435527 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.456669 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.485655 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.486356 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.525627 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.601036 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.697794 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.699373 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.713666 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.806824 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.820858 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.883401 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.909205 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 19 09:49:26 crc kubenswrapper[4873]: I0219 09:49:26.932592 4873 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.075071 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.083264 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.115396 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.223171 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.259637 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.272169 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.333354 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.339023 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.389091 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.423253 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.528453 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.537965 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.563928 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.669907 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.719881 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.739228 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.761569 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.787113 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.798345 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.818253 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 09:49:27 crc kubenswrapper[4873]: I0219 09:49:27.906664 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.151169 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.227341 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.273025 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.287347 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.471898 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.486996 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.489603 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.492181 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.519675 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.533039 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.649337 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.678527 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.706856 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.741223 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.786027 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 19 09:49:28 crc kubenswrapper[4873]: I0219 09:49:28.947325 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.004245 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.027976 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.031382 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.031843 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.166722 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.201341 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.267385 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.297326 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.395388 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.443414 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.487801 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.495916 4873 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.507142 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.522796 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.594628 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.666349 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.740784 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.902991 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 09:49:29 crc kubenswrapper[4873]: I0219 09:49:29.998827 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.010828 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.065683 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.071372 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.142831 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.230496 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.304665 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.346835 4873 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.347051 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1cd15f918dfe991d39aa6c783668167ec9ec210784e46772d7b52f35e68404c5" gracePeriod=5
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.396511 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.415415 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.474725 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.518585 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.542446 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.590243 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.705786 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.845190 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.904495 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.918491 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.925217 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 19 09:49:30 crc kubenswrapper[4873]: I0219 09:49:30.991630 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.008496 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.011651 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.026156 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.096184 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.098037 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.119620 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.244061 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.278292 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.361009 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.430217 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.483059 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.534637 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.692504 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.716857 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 09:49:31 crc kubenswrapper[4873]: I0219 09:49:31.976621 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.027126 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.035736 4873 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.201595 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.540682 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.551088 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.574893 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.632600 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.646355 4873 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.674772 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.723539 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.767289 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.815568 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.860774 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.875817 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 19 09:49:32 crc kubenswrapper[4873]: I0219 09:49:32.965285 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.128509 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.133360 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.146647 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.150514 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.304898 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.337577 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.346667 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.440678 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.505929 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 09:49:33 crc kubenswrapper[4873]: I0219 09:49:33.968156 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 19 09:49:35 crc kubenswrapper[4873]: I0219 09:49:35.809499 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 09:49:35 crc kubenswrapper[4873]: I0219 09:49:35.809805 4873 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1cd15f918dfe991d39aa6c783668167ec9ec210784e46772d7b52f35e68404c5" exitCode=137
Feb 19 09:49:35 crc kubenswrapper[4873]: I0219 09:49:35.917414 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 09:49:35 crc kubenswrapper[4873]: I0219 09:49:35.917615 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.061808 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.061949 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.061947 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062010 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062041 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062056 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062158 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062165 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062166 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062455 4873 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062482 4873 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062501 4873 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.062521 4873 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.070621 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.163443 4873 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.817808 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.817879 4873 scope.go:117] "RemoveContainer" containerID="1cd15f918dfe991d39aa6c783668167ec9ec210784e46772d7b52f35e68404c5"
Feb 19 09:49:36 crc kubenswrapper[4873]: I0219 09:49:36.818070 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 09:49:37 crc kubenswrapper[4873]: I0219 09:49:37.495559 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 19 09:49:51 crc kubenswrapper[4873]: I0219 09:49:51.183372 4873 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.738071 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"]
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.738880 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerName="controller-manager" containerID="cri-o://ca0ba083f2d897c6b2f519cbc9b73b7e76a6575165553e074c67d17692757e96" gracePeriod=30
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.741504 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"]
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.741708 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" containerID="cri-o://444f0cf49cee0a425b45e35836ef80b67c64300d173966d869b2bf6f32c4f2d2" gracePeriod=30
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.752191 4873 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qltqp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.752246 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.974799 4873 generic.go:334] "Generic (PLEG): container finished" podID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerID="ca0ba083f2d897c6b2f519cbc9b73b7e76a6575165553e074c67d17692757e96" exitCode=0
Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.974864 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" event={"ID":"c8c1d3a6-23fd-4526-8892-0add23b09a9a","Type":"ContainerDied","Data":"ca0ba083f2d897c6b2f519cbc9b73b7e76a6575165553e074c67d17692757e96"}
Feb 19 09:50:02 crc 
kubenswrapper[4873]: I0219 09:50:02.976710 4873 generic.go:334] "Generic (PLEG): container finished" podID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerID="444f0cf49cee0a425b45e35836ef80b67c64300d173966d869b2bf6f32c4f2d2" exitCode=0 Feb 19 09:50:02 crc kubenswrapper[4873]: I0219 09:50:02.976755 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" event={"ID":"bd468f98-7720-4f9a-972f-684b96f4f90f","Type":"ContainerDied","Data":"444f0cf49cee0a425b45e35836ef80b67c64300d173966d869b2bf6f32c4f2d2"} Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.075221 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.080002 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176079 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert\") pod \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176152 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmb7k\" (UniqueName: \"kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k\") pod \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176201 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles\") pod \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176229 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config\") pod \"bd468f98-7720-4f9a-972f-684b96f4f90f\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176285 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca\") pod \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176315 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert\") pod \"bd468f98-7720-4f9a-972f-684b96f4f90f\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176337 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config\") pod \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\" (UID: \"c8c1d3a6-23fd-4526-8892-0add23b09a9a\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176929 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c8c1d3a6-23fd-4526-8892-0add23b09a9a" (UID: "c8c1d3a6-23fd-4526-8892-0add23b09a9a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176945 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config" (OuterVolumeSpecName: "config") pod "bd468f98-7720-4f9a-972f-684b96f4f90f" (UID: "bd468f98-7720-4f9a-972f-684b96f4f90f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.176987 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config" (OuterVolumeSpecName: "config") pod "c8c1d3a6-23fd-4526-8892-0add23b09a9a" (UID: "c8c1d3a6-23fd-4526-8892-0add23b09a9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177052 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll4wz\" (UniqueName: \"kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz\") pod \"bd468f98-7720-4f9a-972f-684b96f4f90f\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177071 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca\") pod \"bd468f98-7720-4f9a-972f-684b96f4f90f\" (UID: \"bd468f98-7720-4f9a-972f-684b96f4f90f\") " Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177507 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "c8c1d3a6-23fd-4526-8892-0add23b09a9a" (UID: "c8c1d3a6-23fd-4526-8892-0add23b09a9a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177720 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd468f98-7720-4f9a-972f-684b96f4f90f" (UID: "bd468f98-7720-4f9a-972f-684b96f4f90f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177865 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177883 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177903 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177916 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c1d3a6-23fd-4526-8892-0add23b09a9a-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.177929 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd468f98-7720-4f9a-972f-684b96f4f90f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.182753 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "c8c1d3a6-23fd-4526-8892-0add23b09a9a" (UID: "c8c1d3a6-23fd-4526-8892-0add23b09a9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.190570 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd468f98-7720-4f9a-972f-684b96f4f90f" (UID: "bd468f98-7720-4f9a-972f-684b96f4f90f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.191375 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k" (OuterVolumeSpecName: "kube-api-access-mmb7k") pod "c8c1d3a6-23fd-4526-8892-0add23b09a9a" (UID: "c8c1d3a6-23fd-4526-8892-0add23b09a9a"). InnerVolumeSpecName "kube-api-access-mmb7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.192064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz" (OuterVolumeSpecName: "kube-api-access-ll4wz") pod "bd468f98-7720-4f9a-972f-684b96f4f90f" (UID: "bd468f98-7720-4f9a-972f-684b96f4f90f"). InnerVolumeSpecName "kube-api-access-ll4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.279506 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd468f98-7720-4f9a-972f-684b96f4f90f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.279541 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll4wz\" (UniqueName: \"kubernetes.io/projected/bd468f98-7720-4f9a-972f-684b96f4f90f-kube-api-access-ll4wz\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.279556 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c1d3a6-23fd-4526-8892-0add23b09a9a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.279568 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmb7k\" (UniqueName: \"kubernetes.io/projected/c8c1d3a6-23fd-4526-8892-0add23b09a9a-kube-api-access-mmb7k\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.982457 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.983053 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvxgz" event={"ID":"c8c1d3a6-23fd-4526-8892-0add23b09a9a","Type":"ContainerDied","Data":"3919526da5da79321b05444b65501cd491975ca30007c3620a85b734545d5c95"} Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.983090 4873 scope.go:117] "RemoveContainer" containerID="ca0ba083f2d897c6b2f519cbc9b73b7e76a6575165553e074c67d17692757e96" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.986086 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" event={"ID":"bd468f98-7720-4f9a-972f-684b96f4f90f","Type":"ContainerDied","Data":"00d137182546ceb731d1231ff4489ff44e56001f5469f15e0d3bd78dd28af61d"} Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.986192 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp" Feb 19 09:50:03 crc kubenswrapper[4873]: I0219 09:50:03.998911 4873 scope.go:117] "RemoveContainer" containerID="444f0cf49cee0a425b45e35836ef80b67c64300d173966d869b2bf6f32c4f2d2" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.003703 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.008567 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvxgz"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.018700 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.022030 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qltqp"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.053996 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c"] Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.054293 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" containerName="installer" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054308 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" containerName="installer" Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.054318 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054323 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.054331 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerName="controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054338 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerName="controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.054353 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054358 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054454 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" containerName="controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054464 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" containerName="route-controller-manager" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054475 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd45a6e-fa80-4995-bab8-20796784d618" containerName="installer" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054485 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.054853 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.055882 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-6mqsf"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.056504 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.058575 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.058745 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.058927 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.058956 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.062969 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.063505 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.065042 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.065880 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 
09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.065987 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.065928 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.066050 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.066785 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.071379 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.071880 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.073057 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-6mqsf"] Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087698 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087729 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087750 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhlsg\" (UniqueName: \"kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087790 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087810 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdgh\" (UniqueName: \"kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " 
pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087828 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087849 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.087885 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.125608 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-6mqsf"] Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.125939 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-lhlsg proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" podUID="c217268d-be2f-4eca-b26d-f6659ef4c9ce" Feb 
19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.133820 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c"] Feb 19 09:50:04 crc kubenswrapper[4873]: E0219 09:50:04.134201 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-vwdgh serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" podUID="cd6c625f-090c-449a-97c8-d67aa7a5ea3b" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188639 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188675 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhlsg\" (UniqueName: \"kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188706 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188736 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188762 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdgh\" (UniqueName: \"kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188792 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188868 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " 
pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.188891 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.190203 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.190221 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.190600 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.190654 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca\") pod 
\"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.191081 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.192672 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.198728 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.206130 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdgh\" (UniqueName: \"kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh\") pod \"route-controller-manager-595695d48d-qrq4c\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:04 crc kubenswrapper[4873]: I0219 09:50:04.206949 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhlsg\" 
(UniqueName: \"kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg\") pod \"controller-manager-846b877c48-6mqsf\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:04.999963 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.000036 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.010955 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.019013 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199308 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles\") pod \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199373 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config\") pod \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199417 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhlsg\" (UniqueName: \"kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg\") pod \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199461 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config\") pod \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199480 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca\") pod \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199494 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vwdgh\" (UniqueName: \"kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh\") pod \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199519 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert\") pod \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199536 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert\") pod \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\" (UID: \"c217268d-be2f-4eca-b26d-f6659ef4c9ce\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199565 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca\") pod \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\" (UID: \"cd6c625f-090c-449a-97c8-d67aa7a5ea3b\") " Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.199963 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config" (OuterVolumeSpecName: "config") pod "cd6c625f-090c-449a-97c8-d67aa7a5ea3b" (UID: "cd6c625f-090c-449a-97c8-d67aa7a5ea3b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.200151 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd6c625f-090c-449a-97c8-d67aa7a5ea3b" (UID: "cd6c625f-090c-449a-97c8-d67aa7a5ea3b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.200343 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c217268d-be2f-4eca-b26d-f6659ef4c9ce" (UID: "c217268d-be2f-4eca-b26d-f6659ef4c9ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.200455 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "c217268d-be2f-4eca-b26d-f6659ef4c9ce" (UID: "c217268d-be2f-4eca-b26d-f6659ef4c9ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.200898 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config" (OuterVolumeSpecName: "config") pod "c217268d-be2f-4eca-b26d-f6659ef4c9ce" (UID: "c217268d-be2f-4eca-b26d-f6659ef4c9ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.203459 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh" (OuterVolumeSpecName: "kube-api-access-vwdgh") pod "cd6c625f-090c-449a-97c8-d67aa7a5ea3b" (UID: "cd6c625f-090c-449a-97c8-d67aa7a5ea3b"). InnerVolumeSpecName "kube-api-access-vwdgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.203960 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg" (OuterVolumeSpecName: "kube-api-access-lhlsg") pod "c217268d-be2f-4eca-b26d-f6659ef4c9ce" (UID: "c217268d-be2f-4eca-b26d-f6659ef4c9ce"). InnerVolumeSpecName "kube-api-access-lhlsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.205278 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c217268d-be2f-4eca-b26d-f6659ef4c9ce" (UID: "c217268d-be2f-4eca-b26d-f6659ef4c9ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.216229 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd6c625f-090c-449a-97c8-d67aa7a5ea3b" (UID: "cd6c625f-090c-449a-97c8-d67aa7a5ea3b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.301326 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.301921 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.302087 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhlsg\" (UniqueName: \"kubernetes.io/projected/c217268d-be2f-4eca-b26d-f6659ef4c9ce-kube-api-access-lhlsg\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.302368 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.302547 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c217268d-be2f-4eca-b26d-f6659ef4c9ce-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.302712 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdgh\" (UniqueName: \"kubernetes.io/projected/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-kube-api-access-vwdgh\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.303012 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.303257 4873 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c217268d-be2f-4eca-b26d-f6659ef4c9ce-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.303455 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd6c625f-090c-449a-97c8-d67aa7a5ea3b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.495754 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd468f98-7720-4f9a-972f-684b96f4f90f" path="/var/lib/kubelet/pods/bd468f98-7720-4f9a-972f-684b96f4f90f/volumes" Feb 19 09:50:05 crc kubenswrapper[4873]: I0219 09:50:05.496765 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c1d3a6-23fd-4526-8892-0add23b09a9a" path="/var/lib/kubelet/pods/c8c1d3a6-23fd-4526-8892-0add23b09a9a/volumes" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.009666 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.010218 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-6mqsf" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.063449 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-6mqsf"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.074244 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.075698 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.078760 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.079035 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.082507 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-6mqsf"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.088551 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.089434 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.096791 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.096829 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.096918 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.098750 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.099229 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:50:06 crc 
kubenswrapper[4873]: I0219 09:50:06.102740 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-qrq4c"] Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.117331 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.117388 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkgk\" (UniqueName: \"kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.117412 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.117523 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 
09:50:06.117544 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.218286 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.218365 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkgk\" (UniqueName: \"kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.218397 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.218421 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.218444 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.219747 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.220223 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.220937 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.224122 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: 
\"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.246708 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkgk\" (UniqueName: \"kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk\") pod \"controller-manager-7f6bd8fd79-7bjjb\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.409499 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:06 crc kubenswrapper[4873]: I0219 09:50:06.615591 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:06 crc kubenswrapper[4873]: W0219 09:50:06.629328 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6f3c0e1_ae96_4845_bafc_25bf413d357b.slice/crio-6f762e3ceb0d2e022ff5de2625a5e24c06e6239d60aae50fd9e175814ed573e9 WatchSource:0}: Error finding container 6f762e3ceb0d2e022ff5de2625a5e24c06e6239d60aae50fd9e175814ed573e9: Status 404 returned error can't find the container with id 6f762e3ceb0d2e022ff5de2625a5e24c06e6239d60aae50fd9e175814ed573e9 Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.015801 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" event={"ID":"b6f3c0e1-ae96-4845-bafc-25bf413d357b","Type":"ContainerStarted","Data":"43f41ad0522c975ab5de8d4b7a10f731cd1ee469d6711e1f806948b3c65b26a6"} Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.016210 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" event={"ID":"b6f3c0e1-ae96-4845-bafc-25bf413d357b","Type":"ContainerStarted","Data":"6f762e3ceb0d2e022ff5de2625a5e24c06e6239d60aae50fd9e175814ed573e9"} Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.017600 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.034249 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.070921 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" podStartSLOduration=3.070901876 podStartE2EDuration="3.070901876s" podCreationTimestamp="2026-02-19 09:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:07.043394156 +0000 UTC m=+316.332825804" watchObservedRunningTime="2026-02-19 09:50:07.070901876 +0000 UTC m=+316.360333514" Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.495981 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c217268d-be2f-4eca-b26d-f6659ef4c9ce" path="/var/lib/kubelet/pods/c217268d-be2f-4eca-b26d-f6659ef4c9ce/volumes" Feb 19 09:50:07 crc kubenswrapper[4873]: I0219 09:50:07.496627 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6c625f-090c-449a-97c8-d67aa7a5ea3b" path="/var/lib/kubelet/pods/cd6c625f-090c-449a-97c8-d67aa7a5ea3b/volumes" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.058223 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.059212 4873 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.062214 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.062334 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.062382 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.063070 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.063092 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.065955 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.074847 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.243885 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 
09:50:08.243992 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.244032 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzg9\" (UniqueName: \"kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.244095 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.345799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.345929 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzg9\" (UniqueName: 
\"kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.346014 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.346092 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.347320 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.348356 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc 
kubenswrapper[4873]: I0219 09:50:08.354976 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.370328 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzg9\" (UniqueName: \"kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9\") pod \"route-controller-manager-6685f4fd5b-cmw9x\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.389134 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:08 crc kubenswrapper[4873]: I0219 09:50:08.593693 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:50:09 crc kubenswrapper[4873]: I0219 09:50:09.030150 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" event={"ID":"3db2587a-f66b-4e3e-855f-9973e9b28743","Type":"ContainerStarted","Data":"bc6dcfd23752a86e6e1bb6f7dd9d0bc1aa50316ea10ce09d6252d4ba5b4e6b9f"} Feb 19 09:50:09 crc kubenswrapper[4873]: I0219 09:50:09.030206 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" event={"ID":"3db2587a-f66b-4e3e-855f-9973e9b28743","Type":"ContainerStarted","Data":"456ae351a251d151fea49e6f19e6eb9dec882c42d7b5599fb86ab622c2053df9"} Feb 19 09:50:09 
crc kubenswrapper[4873]: I0219 09:50:09.030367 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:09 crc kubenswrapper[4873]: I0219 09:50:09.053634 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" podStartSLOduration=5.053609115 podStartE2EDuration="5.053609115s" podCreationTimestamp="2026-02-19 09:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:09.051084672 +0000 UTC m=+318.340516340" watchObservedRunningTime="2026-02-19 09:50:09.053609115 +0000 UTC m=+318.343040793" Feb 19 09:50:09 crc kubenswrapper[4873]: I0219 09:50:09.098010 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:50:22 crc kubenswrapper[4873]: I0219 09:50:22.741588 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:22 crc kubenswrapper[4873]: I0219 09:50:22.742480 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" podUID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" containerName="controller-manager" containerID="cri-o://43f41ad0522c975ab5de8d4b7a10f731cd1ee469d6711e1f806948b3c65b26a6" gracePeriod=30 Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.126706 4873 generic.go:334] "Generic (PLEG): container finished" podID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" containerID="43f41ad0522c975ab5de8d4b7a10f731cd1ee469d6711e1f806948b3c65b26a6" exitCode=0 Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.126829 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" event={"ID":"b6f3c0e1-ae96-4845-bafc-25bf413d357b","Type":"ContainerDied","Data":"43f41ad0522c975ab5de8d4b7a10f731cd1ee469d6711e1f806948b3c65b26a6"} Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.261678 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.456407 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert\") pod \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.456660 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca\") pod \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.456692 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config\") pod \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.456716 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles\") pod \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.456761 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkgk\" 
(UniqueName: \"kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk\") pod \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\" (UID: \"b6f3c0e1-ae96-4845-bafc-25bf413d357b\") " Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.457533 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6f3c0e1-ae96-4845-bafc-25bf413d357b" (UID: "b6f3c0e1-ae96-4845-bafc-25bf413d357b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.457546 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b6f3c0e1-ae96-4845-bafc-25bf413d357b" (UID: "b6f3c0e1-ae96-4845-bafc-25bf413d357b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.457589 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config" (OuterVolumeSpecName: "config") pod "b6f3c0e1-ae96-4845-bafc-25bf413d357b" (UID: "b6f3c0e1-ae96-4845-bafc-25bf413d357b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.462451 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk" (OuterVolumeSpecName: "kube-api-access-ltkgk") pod "b6f3c0e1-ae96-4845-bafc-25bf413d357b" (UID: "b6f3c0e1-ae96-4845-bafc-25bf413d357b"). InnerVolumeSpecName "kube-api-access-ltkgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.462638 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6f3c0e1-ae96-4845-bafc-25bf413d357b" (UID: "b6f3c0e1-ae96-4845-bafc-25bf413d357b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.558632 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.558687 4873 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.558707 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkgk\" (UniqueName: \"kubernetes.io/projected/b6f3c0e1-ae96-4845-bafc-25bf413d357b-kube-api-access-ltkgk\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.558721 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f3c0e1-ae96-4845-bafc-25bf413d357b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:23 crc kubenswrapper[4873]: I0219 09:50:23.558734 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6f3c0e1-ae96-4845-bafc-25bf413d357b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.070365 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-4bs89"] 
Feb 19 09:50:24 crc kubenswrapper[4873]: E0219 09:50:24.070694 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" containerName="controller-manager" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.070715 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" containerName="controller-manager" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.070869 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" containerName="controller-manager" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.071467 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.085084 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-4bs89"] Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.134868 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" event={"ID":"b6f3c0e1-ae96-4845-bafc-25bf413d357b","Type":"ContainerDied","Data":"6f762e3ceb0d2e022ff5de2625a5e24c06e6239d60aae50fd9e175814ed573e9"} Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.134925 4873 scope.go:117] "RemoveContainer" containerID="43f41ad0522c975ab5de8d4b7a10f731cd1ee469d6711e1f806948b3c65b26a6" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.135039 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.159624 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.167959 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-7bjjb"] Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.168732 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8af9160-21ef-4a41-8ffe-513930b969d0-serving-cert\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.168801 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/d8af9160-21ef-4a41-8ffe-513930b969d0-kube-api-access-46jnl\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.168847 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-client-ca\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.168971 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-proxy-ca-bundles\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.169013 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-config\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.270443 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8af9160-21ef-4a41-8ffe-513930b969d0-serving-cert\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.270866 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-client-ca\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.270903 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/d8af9160-21ef-4a41-8ffe-513930b969d0-kube-api-access-46jnl\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.271064 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-proxy-ca-bundles\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.271130 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-config\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.272545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-client-ca\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.272791 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-proxy-ca-bundles\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.274811 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8af9160-21ef-4a41-8ffe-513930b969d0-config\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 
09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.277379 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8af9160-21ef-4a41-8ffe-513930b969d0-serving-cert\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.295199 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/d8af9160-21ef-4a41-8ffe-513930b969d0-kube-api-access-46jnl\") pod \"controller-manager-846b877c48-4bs89\" (UID: \"d8af9160-21ef-4a41-8ffe-513930b969d0\") " pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.398022 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:24 crc kubenswrapper[4873]: I0219 09:50:24.987697 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846b877c48-4bs89"] Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.141579 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" event={"ID":"d8af9160-21ef-4a41-8ffe-513930b969d0","Type":"ContainerStarted","Data":"b4ec1545226012a614754f2e26b31dc61ea2543f74b40eab1ebee163b1c0b92a"} Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.141982 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.141996 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" 
event={"ID":"d8af9160-21ef-4a41-8ffe-513930b969d0","Type":"ContainerStarted","Data":"ab475893ee1c93f0632441d26257315871e2f293d6676cdd8efdf0b34cc52214"} Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.149175 4873 patch_prober.go:28] interesting pod/controller-manager-846b877c48-4bs89 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.149216 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" podUID="d8af9160-21ef-4a41-8ffe-513930b969d0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.160906 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" podStartSLOduration=3.160887639 podStartE2EDuration="3.160887639s" podCreationTimestamp="2026-02-19 09:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:25.158257073 +0000 UTC m=+334.447688711" watchObservedRunningTime="2026-02-19 09:50:25.160887639 +0000 UTC m=+334.450319277" Feb 19 09:50:25 crc kubenswrapper[4873]: I0219 09:50:25.496744 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6f3c0e1-ae96-4845-bafc-25bf413d357b" path="/var/lib/kubelet/pods/b6f3c0e1-ae96-4845-bafc-25bf413d357b/volumes" Feb 19 09:50:26 crc kubenswrapper[4873]: I0219 09:50:26.154438 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-846b877c48-4bs89" Feb 19 09:50:40 crc 
kubenswrapper[4873]: I0219 09:50:40.292881 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zk9wc"] Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.294338 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.303164 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.312264 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk9wc"] Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.415276 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-utilities\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.415356 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kfj\" (UniqueName: \"kubernetes.io/projected/5f466b31-21ca-4f19-9b73-72cfb7c68d55-kube-api-access-j8kfj\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.415420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-catalog-content\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: 
I0219 09:50:40.516606 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-utilities\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.516674 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kfj\" (UniqueName: \"kubernetes.io/projected/5f466b31-21ca-4f19-9b73-72cfb7c68d55-kube-api-access-j8kfj\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.516702 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-catalog-content\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.517174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-catalog-content\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.517313 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f466b31-21ca-4f19-9b73-72cfb7c68d55-utilities\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.538955 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kfj\" (UniqueName: \"kubernetes.io/projected/5f466b31-21ca-4f19-9b73-72cfb7c68d55-kube-api-access-j8kfj\") pod \"community-operators-zk9wc\" (UID: \"5f466b31-21ca-4f19-9b73-72cfb7c68d55\") " pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:40 crc kubenswrapper[4873]: I0219 09:50:40.607858 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:41 crc kubenswrapper[4873]: I0219 09:50:41.015847 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zk9wc"] Feb 19 09:50:41 crc kubenswrapper[4873]: W0219 09:50:41.021369 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f466b31_21ca_4f19_9b73_72cfb7c68d55.slice/crio-37110418ece1ef9b6ea5a86e0c098f88d36ad6c88b57c3d8af76a46e26ee0033 WatchSource:0}: Error finding container 37110418ece1ef9b6ea5a86e0c098f88d36ad6c88b57c3d8af76a46e26ee0033: Status 404 returned error can't find the container with id 37110418ece1ef9b6ea5a86e0c098f88d36ad6c88b57c3d8af76a46e26ee0033 Feb 19 09:50:41 crc kubenswrapper[4873]: I0219 09:50:41.231070 4873 generic.go:334] "Generic (PLEG): container finished" podID="5f466b31-21ca-4f19-9b73-72cfb7c68d55" containerID="235a4cc9de0ce0d9c318ba6b2fc1c727f9e9bc32ac5707fc258224436b9deb00" exitCode=0 Feb 19 09:50:41 crc kubenswrapper[4873]: I0219 09:50:41.231136 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk9wc" event={"ID":"5f466b31-21ca-4f19-9b73-72cfb7c68d55","Type":"ContainerDied","Data":"235a4cc9de0ce0d9c318ba6b2fc1c727f9e9bc32ac5707fc258224436b9deb00"} Feb 19 09:50:41 crc kubenswrapper[4873]: I0219 09:50:41.231376 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk9wc" 
event={"ID":"5f466b31-21ca-4f19-9b73-72cfb7c68d55","Type":"ContainerStarted","Data":"37110418ece1ef9b6ea5a86e0c098f88d36ad6c88b57c3d8af76a46e26ee0033"} Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.236951 4873 generic.go:334] "Generic (PLEG): container finished" podID="5f466b31-21ca-4f19-9b73-72cfb7c68d55" containerID="32511e960070046f73db7a65487a607b53bb5a81f10c6e1cb939ee3d03b1c42f" exitCode=0 Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.237168 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk9wc" event={"ID":"5f466b31-21ca-4f19-9b73-72cfb7c68d55","Type":"ContainerDied","Data":"32511e960070046f73db7a65487a607b53bb5a81f10c6e1cb939ee3d03b1c42f"} Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.693971 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-prw4c"] Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.695488 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.696820 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.711606 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prw4c"] Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.746778 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-catalog-content\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.746825 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjq7\" (UniqueName: \"kubernetes.io/projected/4cc54252-cfdf-4b71-bfa5-552dcd26500d-kube-api-access-4bjq7\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.747018 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-utilities\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.847855 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-catalog-content\") pod \"redhat-operators-prw4c\" (UID: 
\"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.847901 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjq7\" (UniqueName: \"kubernetes.io/projected/4cc54252-cfdf-4b71-bfa5-552dcd26500d-kube-api-access-4bjq7\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.847955 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-utilities\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.848481 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-utilities\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.848889 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cc54252-cfdf-4b71-bfa5-552dcd26500d-catalog-content\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:42 crc kubenswrapper[4873]: I0219 09:50:42.866994 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjq7\" (UniqueName: \"kubernetes.io/projected/4cc54252-cfdf-4b71-bfa5-552dcd26500d-kube-api-access-4bjq7\") pod \"redhat-operators-prw4c\" (UID: \"4cc54252-cfdf-4b71-bfa5-552dcd26500d\") " 
pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:43 crc kubenswrapper[4873]: I0219 09:50:43.049807 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:43 crc kubenswrapper[4873]: I0219 09:50:43.247319 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zk9wc" event={"ID":"5f466b31-21ca-4f19-9b73-72cfb7c68d55","Type":"ContainerStarted","Data":"c877ae60364e1ca60cc0eb5b572c2d10ada45db34b84d724f8be386b520c7fe2"} Feb 19 09:50:43 crc kubenswrapper[4873]: I0219 09:50:43.272333 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zk9wc" podStartSLOduration=1.8635431709999999 podStartE2EDuration="3.272308565s" podCreationTimestamp="2026-02-19 09:50:40 +0000 UTC" firstStartedPulling="2026-02-19 09:50:41.233298196 +0000 UTC m=+350.522729844" lastFinishedPulling="2026-02-19 09:50:42.64206357 +0000 UTC m=+351.931495238" observedRunningTime="2026-02-19 09:50:43.26932816 +0000 UTC m=+352.558759838" watchObservedRunningTime="2026-02-19 09:50:43.272308565 +0000 UTC m=+352.561740213" Feb 19 09:50:43 crc kubenswrapper[4873]: I0219 09:50:43.508799 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prw4c"] Feb 19 09:50:44 crc kubenswrapper[4873]: I0219 09:50:44.253383 4873 generic.go:334] "Generic (PLEG): container finished" podID="4cc54252-cfdf-4b71-bfa5-552dcd26500d" containerID="f3f57712b2360a2b482c6d51abd9045fd079835e664170ddd4c3c59343cbfa4f" exitCode=0 Feb 19 09:50:44 crc kubenswrapper[4873]: I0219 09:50:44.253496 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prw4c" event={"ID":"4cc54252-cfdf-4b71-bfa5-552dcd26500d","Type":"ContainerDied","Data":"f3f57712b2360a2b482c6d51abd9045fd079835e664170ddd4c3c59343cbfa4f"} Feb 19 09:50:44 crc kubenswrapper[4873]: I0219 
09:50:44.254028 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prw4c" event={"ID":"4cc54252-cfdf-4b71-bfa5-552dcd26500d","Type":"ContainerStarted","Data":"196214af4696ef64a53a09ceae50492efa721dd9677dfaa5fb81d9035acae728"} Feb 19 09:50:46 crc kubenswrapper[4873]: I0219 09:50:46.268311 4873 generic.go:334] "Generic (PLEG): container finished" podID="4cc54252-cfdf-4b71-bfa5-552dcd26500d" containerID="f03d133f183eef9c098e95f93377b7ee5f8e0c45d8583135279394f2b3e5426e" exitCode=0 Feb 19 09:50:46 crc kubenswrapper[4873]: I0219 09:50:46.268404 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prw4c" event={"ID":"4cc54252-cfdf-4b71-bfa5-552dcd26500d","Type":"ContainerDied","Data":"f03d133f183eef9c098e95f93377b7ee5f8e0c45d8583135279394f2b3e5426e"} Feb 19 09:50:47 crc kubenswrapper[4873]: I0219 09:50:47.278953 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prw4c" event={"ID":"4cc54252-cfdf-4b71-bfa5-552dcd26500d","Type":"ContainerStarted","Data":"482af24b132b8069e78985473f493d01541d4588becf63b36629d98342cbc8cb"} Feb 19 09:50:48 crc kubenswrapper[4873]: I0219 09:50:48.240694 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:50:48 crc kubenswrapper[4873]: I0219 09:50:48.240756 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:50:50 crc kubenswrapper[4873]: I0219 09:50:50.608335 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:50 crc kubenswrapper[4873]: I0219 09:50:50.609882 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:50 crc kubenswrapper[4873]: I0219 09:50:50.658152 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:50 crc kubenswrapper[4873]: I0219 09:50:50.678133 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-prw4c" podStartSLOduration=6.29906777 podStartE2EDuration="8.678113724s" podCreationTimestamp="2026-02-19 09:50:42 +0000 UTC" firstStartedPulling="2026-02-19 09:50:44.255003167 +0000 UTC m=+353.544434805" lastFinishedPulling="2026-02-19 09:50:46.634049121 +0000 UTC m=+355.923480759" observedRunningTime="2026-02-19 09:50:47.300705083 +0000 UTC m=+356.590136771" watchObservedRunningTime="2026-02-19 09:50:50.678113724 +0000 UTC m=+359.967545372" Feb 19 09:50:51 crc kubenswrapper[4873]: I0219 09:50:51.348048 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zk9wc" Feb 19 09:50:53 crc kubenswrapper[4873]: I0219 09:50:53.050332 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:53 crc kubenswrapper[4873]: I0219 09:50:53.050383 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:50:54 crc kubenswrapper[4873]: I0219 09:50:54.124895 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-prw4c" podUID="4cc54252-cfdf-4b71-bfa5-552dcd26500d" containerName="registry-server" probeResult="failure" output=< Feb 19 09:50:54 crc 
kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 09:50:54 crc kubenswrapper[4873]: > Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.650384 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkkzf"] Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.651505 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.672740 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkkzf"] Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747506 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-tls\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747555 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4f04829-f740-4d17-9358-f59fa6561eaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747592 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbb6c\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-kube-api-access-sbb6c\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 
09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747616 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-trusted-ca\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747711 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-certificates\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-bound-sa-token\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.747758 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f4f04829-f740-4d17-9358-f59fa6561eaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.772817 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.849625 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbb6c\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-kube-api-access-sbb6c\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.849744 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-trusted-ca\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.850284 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-certificates\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.850413 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-bound-sa-token\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.850544 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f4f04829-f740-4d17-9358-f59fa6561eaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.850626 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-tls\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.850687 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4f04829-f740-4d17-9358-f59fa6561eaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.851574 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-certificates\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.851777 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f4f04829-f740-4d17-9358-f59fa6561eaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.852934 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4f04829-f740-4d17-9358-f59fa6561eaa-trusted-ca\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.858217 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-registry-tls\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.861166 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f4f04829-f740-4d17-9358-f59fa6561eaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.871538 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-bound-sa-token\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: 
\"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.872348 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbb6c\" (UniqueName: \"kubernetes.io/projected/f4f04829-f740-4d17-9358-f59fa6561eaa-kube-api-access-sbb6c\") pod \"image-registry-66df7c8f76-gkkzf\" (UID: \"f4f04829-f740-4d17-9358-f59fa6561eaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:56 crc kubenswrapper[4873]: I0219 09:50:56.970923 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:57 crc kubenswrapper[4873]: I0219 09:50:57.522707 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gkkzf"] Feb 19 09:50:57 crc kubenswrapper[4873]: W0219 09:50:57.526529 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f04829_f740_4d17_9358_f59fa6561eaa.slice/crio-1304e88140e14bf1f928899d3525033eebe6aee5fdb24b96819955c54c64d700 WatchSource:0}: Error finding container 1304e88140e14bf1f928899d3525033eebe6aee5fdb24b96819955c54c64d700: Status 404 returned error can't find the container with id 1304e88140e14bf1f928899d3525033eebe6aee5fdb24b96819955c54c64d700 Feb 19 09:50:58 crc kubenswrapper[4873]: I0219 09:50:58.339666 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" event={"ID":"f4f04829-f740-4d17-9358-f59fa6561eaa","Type":"ContainerStarted","Data":"2ac1c42242b389bf16129558a39272bbf47249f38b3e44908f6b6cf7ca19450e"} Feb 19 09:50:58 crc kubenswrapper[4873]: I0219 09:50:58.340018 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:50:58 crc 
kubenswrapper[4873]: I0219 09:50:58.340036 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" event={"ID":"f4f04829-f740-4d17-9358-f59fa6561eaa","Type":"ContainerStarted","Data":"1304e88140e14bf1f928899d3525033eebe6aee5fdb24b96819955c54c64d700"} Feb 19 09:50:58 crc kubenswrapper[4873]: I0219 09:50:58.360162 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" podStartSLOduration=2.360146501 podStartE2EDuration="2.360146501s" podCreationTimestamp="2026-02-19 09:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:50:58.359431713 +0000 UTC m=+367.648863371" watchObservedRunningTime="2026-02-19 09:50:58.360146501 +0000 UTC m=+367.649578139" Feb 19 09:51:03 crc kubenswrapper[4873]: I0219 09:51:03.119870 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:51:03 crc kubenswrapper[4873]: I0219 09:51:03.166373 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-prw4c" Feb 19 09:51:16 crc kubenswrapper[4873]: I0219 09:51:16.978307 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gkkzf" Feb 19 09:51:17 crc kubenswrapper[4873]: I0219 09:51:17.048992 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:51:18 crc kubenswrapper[4873]: I0219 09:51:18.240758 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 09:51:18 crc kubenswrapper[4873]: I0219 09:51:18.241213 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:51:22 crc kubenswrapper[4873]: I0219 09:51:22.748365 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:51:22 crc kubenswrapper[4873]: I0219 09:51:22.748878 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" podUID="3db2587a-f66b-4e3e-855f-9973e9b28743" containerName="route-controller-manager" containerID="cri-o://bc6dcfd23752a86e6e1bb6f7dd9d0bc1aa50316ea10ce09d6252d4ba5b4e6b9f" gracePeriod=30 Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.491920 4873 generic.go:334] "Generic (PLEG): container finished" podID="3db2587a-f66b-4e3e-855f-9973e9b28743" containerID="bc6dcfd23752a86e6e1bb6f7dd9d0bc1aa50316ea10ce09d6252d4ba5b4e6b9f" exitCode=0 Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.492015 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" event={"ID":"3db2587a-f66b-4e3e-855f-9973e9b28743","Type":"ContainerDied","Data":"bc6dcfd23752a86e6e1bb6f7dd9d0bc1aa50316ea10ce09d6252d4ba5b4e6b9f"} Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.715938 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.814222 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kzg9\" (UniqueName: \"kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9\") pod \"3db2587a-f66b-4e3e-855f-9973e9b28743\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.814304 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca\") pod \"3db2587a-f66b-4e3e-855f-9973e9b28743\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.814340 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert\") pod \"3db2587a-f66b-4e3e-855f-9973e9b28743\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.814372 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config\") pod \"3db2587a-f66b-4e3e-855f-9973e9b28743\" (UID: \"3db2587a-f66b-4e3e-855f-9973e9b28743\") " Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.815765 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca" (OuterVolumeSpecName: "client-ca") pod "3db2587a-f66b-4e3e-855f-9973e9b28743" (UID: "3db2587a-f66b-4e3e-855f-9973e9b28743"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.816324 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config" (OuterVolumeSpecName: "config") pod "3db2587a-f66b-4e3e-855f-9973e9b28743" (UID: "3db2587a-f66b-4e3e-855f-9973e9b28743"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.828037 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3db2587a-f66b-4e3e-855f-9973e9b28743" (UID: "3db2587a-f66b-4e3e-855f-9973e9b28743"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.828059 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9" (OuterVolumeSpecName: "kube-api-access-2kzg9") pod "3db2587a-f66b-4e3e-855f-9973e9b28743" (UID: "3db2587a-f66b-4e3e-855f-9973e9b28743"). InnerVolumeSpecName "kube-api-access-2kzg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.915709 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kzg9\" (UniqueName: \"kubernetes.io/projected/3db2587a-f66b-4e3e-855f-9973e9b28743-kube-api-access-2kzg9\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.915753 4873 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.915768 4873 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3db2587a-f66b-4e3e-855f-9973e9b28743-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:23 crc kubenswrapper[4873]: I0219 09:51:23.915782 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db2587a-f66b-4e3e-855f-9973e9b28743-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.119540 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq"] Feb 19 09:51:24 crc kubenswrapper[4873]: E0219 09:51:24.119931 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db2587a-f66b-4e3e-855f-9973e9b28743" containerName="route-controller-manager" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.119964 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db2587a-f66b-4e3e-855f-9973e9b28743" containerName="route-controller-manager" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.120187 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db2587a-f66b-4e3e-855f-9973e9b28743" containerName="route-controller-manager" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.120844 
4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.148718 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq"] Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.220336 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprpl\" (UniqueName: \"kubernetes.io/projected/26cf91f4-9f21-487d-8ee9-23700f39e900-kube-api-access-fprpl\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.220440 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-client-ca\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.220464 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-config\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.220545 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26cf91f4-9f21-487d-8ee9-23700f39e900-serving-cert\") pod 
\"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.322287 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26cf91f4-9f21-487d-8ee9-23700f39e900-serving-cert\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.322369 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fprpl\" (UniqueName: \"kubernetes.io/projected/26cf91f4-9f21-487d-8ee9-23700f39e900-kube-api-access-fprpl\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.322417 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-client-ca\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.322442 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-config\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.324135 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-client-ca\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.324186 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26cf91f4-9f21-487d-8ee9-23700f39e900-config\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.329568 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26cf91f4-9f21-487d-8ee9-23700f39e900-serving-cert\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.343862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fprpl\" (UniqueName: \"kubernetes.io/projected/26cf91f4-9f21-487d-8ee9-23700f39e900-kube-api-access-fprpl\") pod \"route-controller-manager-595695d48d-g88dq\" (UID: \"26cf91f4-9f21-487d-8ee9-23700f39e900\") " pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.447288 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.499765 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" event={"ID":"3db2587a-f66b-4e3e-855f-9973e9b28743","Type":"ContainerDied","Data":"456ae351a251d151fea49e6f19e6eb9dec882c42d7b5599fb86ab622c2053df9"} Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.499814 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.499844 4873 scope.go:117] "RemoveContainer" containerID="bc6dcfd23752a86e6e1bb6f7dd9d0bc1aa50316ea10ce09d6252d4ba5b4e6b9f" Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.528090 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.531184 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-cmw9x"] Feb 19 09:51:24 crc kubenswrapper[4873]: I0219 09:51:24.662233 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq"] Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.496027 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db2587a-f66b-4e3e-855f-9973e9b28743" path="/var/lib/kubelet/pods/3db2587a-f66b-4e3e-855f-9973e9b28743/volumes" Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.507809 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" 
event={"ID":"26cf91f4-9f21-487d-8ee9-23700f39e900","Type":"ContainerStarted","Data":"58fec737d644d6a07362ed517738619806fbbf08b24f9d937d2d1a85a0742dd8"} Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.507874 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" event={"ID":"26cf91f4-9f21-487d-8ee9-23700f39e900","Type":"ContainerStarted","Data":"8b7eb58a34f6f670f3aaa5180f495d0f1ad02db6e966a32f0a45dbd24399f695"} Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.509567 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.514055 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" Feb 19 09:51:25 crc kubenswrapper[4873]: I0219 09:51:25.536776 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-595695d48d-g88dq" podStartSLOduration=3.536755834 podStartE2EDuration="3.536755834s" podCreationTimestamp="2026-02-19 09:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:51:25.534973099 +0000 UTC m=+394.824404767" watchObservedRunningTime="2026-02-19 09:51:25.536755834 +0000 UTC m=+394.826187482" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.095851 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" podUID="2948a5a7-4d94-4314-acdf-489dd93609b9" containerName="registry" containerID="cri-o://1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc" gracePeriod=30 Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.479730 4873 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606420 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606477 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606519 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606556 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606610 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606660 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7wm6\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606686 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.606714 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets\") pod \"2948a5a7-4d94-4314-acdf-489dd93609b9\" (UID: \"2948a5a7-4d94-4314-acdf-489dd93609b9\") " Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.607890 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.608642 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.613289 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6" (OuterVolumeSpecName: "kube-api-access-q7wm6") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "kube-api-access-q7wm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.614064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.614347 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.617479 4873 generic.go:334] "Generic (PLEG): container finished" podID="2948a5a7-4d94-4314-acdf-489dd93609b9" containerID="1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc" exitCode=0 Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.617525 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" event={"ID":"2948a5a7-4d94-4314-acdf-489dd93609b9","Type":"ContainerDied","Data":"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc"} Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.617553 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" event={"ID":"2948a5a7-4d94-4314-acdf-489dd93609b9","Type":"ContainerDied","Data":"9186481593e0db9c07ae375e1f7f148954394edd55d25a1feea71b13835f9c08"} Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.617578 4873 scope.go:117] "RemoveContainer" containerID="1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.617695 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7hhjq" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.622497 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.625125 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.625459 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2948a5a7-4d94-4314-acdf-489dd93609b9" (UID: "2948a5a7-4d94-4314-acdf-489dd93609b9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.664909 4873 scope.go:117] "RemoveContainer" containerID="1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc" Feb 19 09:51:42 crc kubenswrapper[4873]: E0219 09:51:42.665571 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc\": container with ID starting with 1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc not found: ID does not exist" containerID="1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.665627 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc"} err="failed to get container status \"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc\": rpc error: 
code = NotFound desc = could not find container \"1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc\": container with ID starting with 1996e733635906962b7b2e8b3762e89eeadec10f1b534e11c0d90dd0767471bc not found: ID does not exist" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708148 4873 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2948a5a7-4d94-4314-acdf-489dd93609b9-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708193 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7wm6\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-kube-api-access-q7wm6\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708208 4873 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708220 4873 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2948a5a7-4d94-4314-acdf-489dd93609b9-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708231 4873 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708242 4873 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2948a5a7-4d94-4314-acdf-489dd93609b9-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.708252 4873 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2948a5a7-4d94-4314-acdf-489dd93609b9-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.949279 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:51:42 crc kubenswrapper[4873]: I0219 09:51:42.956370 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7hhjq"] Feb 19 09:51:43 crc kubenswrapper[4873]: I0219 09:51:43.491658 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2948a5a7-4d94-4314-acdf-489dd93609b9" path="/var/lib/kubelet/pods/2948a5a7-4d94-4314-acdf-489dd93609b9/volumes" Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.240373 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.240818 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.240913 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.242873 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.243232 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a" gracePeriod=600 Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.669241 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a" exitCode=0 Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.669298 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a"} Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.669605 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552"} Feb 19 09:51:48 crc kubenswrapper[4873]: I0219 09:51:48.669630 4873 scope.go:117] "RemoveContainer" containerID="9afd159bda3d2cbd676930ddf4df8cf39b5da5575d7c4d647ae91446f1b76837" Feb 19 09:53:48 crc kubenswrapper[4873]: I0219 09:53:48.240554 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:53:48 crc kubenswrapper[4873]: I0219 09:53:48.241210 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.193048 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv"] Feb 19 09:54:12 crc kubenswrapper[4873]: E0219 09:54:12.193751 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2948a5a7-4d94-4314-acdf-489dd93609b9" containerName="registry" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.193766 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2948a5a7-4d94-4314-acdf-489dd93609b9" containerName="registry" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.193867 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="2948a5a7-4d94-4314-acdf-489dd93609b9" containerName="registry" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.194210 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.196817 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.197329 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9xdhb" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.198513 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.203867 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ckd42"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.204485 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ckd42" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.206139 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-j6cgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.211440 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.227579 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fhd9c"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.228254 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.229993 4873 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kzvgl" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.240610 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ckd42"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.244087 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fhd9c"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.250748 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pds7j\" (UniqueName: \"kubernetes.io/projected/084c90b4-3270-4f64-8c8c-1a96f05dc1fa-kube-api-access-pds7j\") pod \"cert-manager-cainjector-cf98fcc89-zhqgv\" (UID: \"084c90b4-3270-4f64-8c8c-1a96f05dc1fa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.250838 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nzs\" (UniqueName: \"kubernetes.io/projected/51fc361b-11a5-480a-a5b9-0eb4b7670e83-kube-api-access-z7nzs\") pod \"cert-manager-858654f9db-ckd42\" (UID: \"51fc361b-11a5-480a-a5b9-0eb4b7670e83\") " pod="cert-manager/cert-manager-858654f9db-ckd42" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.351606 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pds7j\" (UniqueName: \"kubernetes.io/projected/084c90b4-3270-4f64-8c8c-1a96f05dc1fa-kube-api-access-pds7j\") pod \"cert-manager-cainjector-cf98fcc89-zhqgv\" (UID: \"084c90b4-3270-4f64-8c8c-1a96f05dc1fa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.351687 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nzs\" (UniqueName: \"kubernetes.io/projected/51fc361b-11a5-480a-a5b9-0eb4b7670e83-kube-api-access-z7nzs\") pod \"cert-manager-858654f9db-ckd42\" (UID: \"51fc361b-11a5-480a-a5b9-0eb4b7670e83\") " pod="cert-manager/cert-manager-858654f9db-ckd42" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.351720 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n5z9\" (UniqueName: \"kubernetes.io/projected/2eebe311-368b-45b4-9e74-7442221e3785-kube-api-access-2n5z9\") pod \"cert-manager-webhook-687f57d79b-fhd9c\" (UID: \"2eebe311-368b-45b4-9e74-7442221e3785\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.376645 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nzs\" (UniqueName: \"kubernetes.io/projected/51fc361b-11a5-480a-a5b9-0eb4b7670e83-kube-api-access-z7nzs\") pod \"cert-manager-858654f9db-ckd42\" (UID: \"51fc361b-11a5-480a-a5b9-0eb4b7670e83\") " pod="cert-manager/cert-manager-858654f9db-ckd42" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.380831 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pds7j\" (UniqueName: \"kubernetes.io/projected/084c90b4-3270-4f64-8c8c-1a96f05dc1fa-kube-api-access-pds7j\") pod \"cert-manager-cainjector-cf98fcc89-zhqgv\" (UID: \"084c90b4-3270-4f64-8c8c-1a96f05dc1fa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.452536 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n5z9\" (UniqueName: \"kubernetes.io/projected/2eebe311-368b-45b4-9e74-7442221e3785-kube-api-access-2n5z9\") pod \"cert-manager-webhook-687f57d79b-fhd9c\" (UID: \"2eebe311-368b-45b4-9e74-7442221e3785\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.474190 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n5z9\" (UniqueName: \"kubernetes.io/projected/2eebe311-368b-45b4-9e74-7442221e3785-kube-api-access-2n5z9\") pod \"cert-manager-webhook-687f57d79b-fhd9c\" (UID: \"2eebe311-368b-45b4-9e74-7442221e3785\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.520191 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.524752 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ckd42" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.540426 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.773479 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv"] Feb 19 09:54:12 crc kubenswrapper[4873]: I0219 09:54:12.781705 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:54:13 crc kubenswrapper[4873]: I0219 09:54:13.035271 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fhd9c"] Feb 19 09:54:13 crc kubenswrapper[4873]: W0219 09:54:13.037169 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51fc361b_11a5_480a_a5b9_0eb4b7670e83.slice/crio-3fcf653b376a5d232c21eb305f3bae7952639ec416199326fc9d2e887fc4cd51 WatchSource:0}: Error finding container 3fcf653b376a5d232c21eb305f3bae7952639ec416199326fc9d2e887fc4cd51: Status 
404 returned error can't find the container with id 3fcf653b376a5d232c21eb305f3bae7952639ec416199326fc9d2e887fc4cd51 Feb 19 09:54:13 crc kubenswrapper[4873]: I0219 09:54:13.040896 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ckd42"] Feb 19 09:54:13 crc kubenswrapper[4873]: I0219 09:54:13.497304 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ckd42" event={"ID":"51fc361b-11a5-480a-a5b9-0eb4b7670e83","Type":"ContainerStarted","Data":"3fcf653b376a5d232c21eb305f3bae7952639ec416199326fc9d2e887fc4cd51"} Feb 19 09:54:13 crc kubenswrapper[4873]: I0219 09:54:13.497378 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" event={"ID":"2eebe311-368b-45b4-9e74-7442221e3785","Type":"ContainerStarted","Data":"fe3e858fa62b4f2a35aee25b4cfc4b7ad57f92c3e5819cec30c79d9d77529f37"} Feb 19 09:54:13 crc kubenswrapper[4873]: I0219 09:54:13.497401 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" event={"ID":"084c90b4-3270-4f64-8c8c-1a96f05dc1fa","Type":"ContainerStarted","Data":"f6699ab394ac1e0d1433a35c173c4d6f6d549698ab2e4369c59d6ac16377af59"} Feb 19 09:54:15 crc kubenswrapper[4873]: I0219 09:54:15.511641 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" event={"ID":"084c90b4-3270-4f64-8c8c-1a96f05dc1fa","Type":"ContainerStarted","Data":"478a0b97c98403eaaf3117b0eaf5a8fc83acb96c391a0b2999b1a23922abda02"} Feb 19 09:54:15 crc kubenswrapper[4873]: I0219 09:54:15.528521 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-zhqgv" podStartSLOduration=1.199860442 podStartE2EDuration="3.528492785s" podCreationTimestamp="2026-02-19 09:54:12 +0000 UTC" firstStartedPulling="2026-02-19 09:54:12.781417812 +0000 UTC m=+562.070849460" 
lastFinishedPulling="2026-02-19 09:54:15.110050165 +0000 UTC m=+564.399481803" observedRunningTime="2026-02-19 09:54:15.525276567 +0000 UTC m=+564.814708215" watchObservedRunningTime="2026-02-19 09:54:15.528492785 +0000 UTC m=+564.817924423" Feb 19 09:54:17 crc kubenswrapper[4873]: I0219 09:54:17.537787 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ckd42" event={"ID":"51fc361b-11a5-480a-a5b9-0eb4b7670e83","Type":"ContainerStarted","Data":"37903da37b7892f7f7f358707156238c37595ef3e70f73adcd90a7236e105f34"} Feb 19 09:54:17 crc kubenswrapper[4873]: I0219 09:54:17.540875 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" event={"ID":"2eebe311-368b-45b4-9e74-7442221e3785","Type":"ContainerStarted","Data":"3feba744fc116718f3cbb1c1da4b7e8c8c044a5f2e20e19597dc969073b648a9"} Feb 19 09:54:17 crc kubenswrapper[4873]: I0219 09:54:17.541187 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:17 crc kubenswrapper[4873]: I0219 09:54:17.557243 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ckd42" podStartSLOduration=1.734597703 podStartE2EDuration="5.557226786s" podCreationTimestamp="2026-02-19 09:54:12 +0000 UTC" firstStartedPulling="2026-02-19 09:54:13.039307599 +0000 UTC m=+562.328739237" lastFinishedPulling="2026-02-19 09:54:16.861936672 +0000 UTC m=+566.151368320" observedRunningTime="2026-02-19 09:54:17.556617321 +0000 UTC m=+566.846048959" watchObservedRunningTime="2026-02-19 09:54:17.557226786 +0000 UTC m=+566.846658424" Feb 19 09:54:17 crc kubenswrapper[4873]: I0219 09:54:17.579006 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" podStartSLOduration=1.810077937 podStartE2EDuration="5.578991272s" podCreationTimestamp="2026-02-19 
09:54:12 +0000 UTC" firstStartedPulling="2026-02-19 09:54:13.033867787 +0000 UTC m=+562.323299425" lastFinishedPulling="2026-02-19 09:54:16.802781122 +0000 UTC m=+566.092212760" observedRunningTime="2026-02-19 09:54:17.576936662 +0000 UTC m=+566.866368320" watchObservedRunningTime="2026-02-19 09:54:17.578991272 +0000 UTC m=+566.868422900" Feb 19 09:54:18 crc kubenswrapper[4873]: I0219 09:54:18.240963 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:54:18 crc kubenswrapper[4873]: I0219 09:54:18.241596 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.138866 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j94bh"] Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.139789 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.139845 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="northd" containerID="cri-o://fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2" gracePeriod=30 Feb 
19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.139832 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="nbdb" containerID="cri-o://cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.139951 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-node" containerID="cri-o://531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.140015 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-acl-logging" containerID="cri-o://818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.140059 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-controller" containerID="cri-o://ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.140068 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="sbdb" containerID="cri-o://02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.197762 4873 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" containerID="cri-o://fbe398acea08ecbb128c7f23474abd3c929b29591afd83ce34befc3c628c7ddb" gracePeriod=30 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.543828 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-fhd9c" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.577953 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovnkube-controller/3.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.583871 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-acl-logging/0.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.584539 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-controller/0.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585036 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="fbe398acea08ecbb128c7f23474abd3c929b29591afd83ce34befc3c628c7ddb" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585075 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585084 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585095 4873 generic.go:334] "Generic (PLEG): 
container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585130 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585127 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"fbe398acea08ecbb128c7f23474abd3c929b29591afd83ce34befc3c628c7ddb"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585172 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585186 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585199 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585140 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe" exitCode=0 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585211 
4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585218 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4" exitCode=143 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585229 4873 generic.go:334] "Generic (PLEG): container finished" podID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerID="ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd" exitCode=143 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585399 4873 scope.go:117] "RemoveContainer" containerID="e579ee790b5fefd1d5bf854d00160acec2752733df35041a4f9ec15c5d947308" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585559 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.585581 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.588006 4873 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/2.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.588660 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/1.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.588711 4873 generic.go:334] "Generic (PLEG): container finished" podID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff" containerID="ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2" exitCode=2 Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.588740 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerDied","Data":"ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2"} Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.589249 4873 scope.go:117] "RemoveContainer" containerID="ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.589504 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4pk8x_openshift-multus(e1ae3d8d-27cf-489f-a6ba-ef914db74bff)\"" pod="openshift-multus/multus-4pk8x" podUID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.627125 4873 scope.go:117] "RemoveContainer" containerID="81ec7da29e3b03fb97f0d183d69bb256ed8f7340ca5df7e0c44bedd129b968cc" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.900586 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-acl-logging/0.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.901461 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-controller/0.log" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.903284 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.974707 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z4jgv"] Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.974927 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-acl-logging" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.974943 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-acl-logging" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.974953 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.974961 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.974973 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="sbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.974981 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="sbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.974989 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="nbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.974997 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="nbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975006 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975013 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975022 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kubecfg-setup" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975030 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kubecfg-setup" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975041 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975047 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975056 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975062 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975071 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975078 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975090 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-node" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975096 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-node" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975208 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975219 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975231 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="northd" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975240 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="northd" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975344 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-node" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975355 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="nbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975363 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975371 4873 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="northd" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975382 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975392 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975399 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975410 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975419 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="sbdb" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975430 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovn-acl-logging" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975441 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975450 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: E0219 09:54:22.975557 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.975567 4873 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" containerName="ovnkube-controller" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.977122 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998502 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998552 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998585 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998624 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998629 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" 
(UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998655 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998761 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998785 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998819 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998858 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998917 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998948 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.998980 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999013 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999038 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999065 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999094 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7vl\" (UniqueName: \"kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999138 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999159 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999184 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch\") pod \"a7760a15-9ea0-42f0-b42b-72de30071d14\" (UID: \"a7760a15-9ea0-42f0-b42b-72de30071d14\") " Feb 19 09:54:22 crc 
kubenswrapper[4873]: I0219 09:54:22.999345 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-systemd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999380 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-kubelet\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999424 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-bin\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999445 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-env-overrides\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999472 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-node-log\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc 
kubenswrapper[4873]: I0219 09:54:22.999493 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999516 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-systemd-units\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999535 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999582 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-etc-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999611 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-slash\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-netd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999666 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-netns\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999696 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-script-lib\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999728 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrbxf\" (UniqueName: \"kubernetes.io/projected/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-kube-api-access-nrbxf\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999753 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999782 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-var-lib-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999812 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-ovn\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999851 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovn-node-metrics-cert\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999886 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-log-socket\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:22 crc kubenswrapper[4873]: I0219 09:54:22.999927 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-config\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:22.999987 4873 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000024 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000087 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000159 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000238 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000255 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash" (OuterVolumeSpecName: "host-slash") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000301 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000304 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000315 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000312 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000325 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log" (OuterVolumeSpecName: "node-log") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000346 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000380 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000390 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000392 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket" (OuterVolumeSpecName: "log-socket") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000683 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.000711 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.005486 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl" (OuterVolumeSpecName: "kube-api-access-vz7vl") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "kube-api-access-vz7vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.006984 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.029240 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a7760a15-9ea0-42f0-b42b-72de30071d14" (UID: "a7760a15-9ea0-42f0-b42b-72de30071d14"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-systemd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100878 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-kubelet\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100900 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-bin\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100915 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-env-overrides\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100931 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-node-log\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 
09:54:23.100946 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100963 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-systemd-units\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.100978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101001 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-etc-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101018 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-slash\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101035 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-netd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101050 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-netns\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101066 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-script-lib\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101083 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrbxf\" (UniqueName: \"kubernetes.io/projected/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-kube-api-access-nrbxf\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101095 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101130 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-var-lib-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101153 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-ovn\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101180 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovn-node-metrics-cert\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101200 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-log-socket\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101227 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-config\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101271 4873 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101282 4873 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101291 4873 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101301 4873 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101309 4873 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101317 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101325 4873 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101334 4873 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-log-socket\") on node 
\"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101342 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7vl\" (UniqueName: \"kubernetes.io/projected/a7760a15-9ea0-42f0-b42b-72de30071d14-kube-api-access-vz7vl\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101350 4873 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101360 4873 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101368 4873 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101377 4873 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101384 4873 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101392 4873 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101400 4873 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a7760a15-9ea0-42f0-b42b-72de30071d14-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101409 4873 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a7760a15-9ea0-42f0-b42b-72de30071d14-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101433 4873 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.101441 4873 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a7760a15-9ea0-42f0-b42b-72de30071d14-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102154 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-config\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102196 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-systemd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102219 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-kubelet\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102239 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-bin\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102550 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-env-overrides\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102582 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-node-log\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102604 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102624 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-systemd-units\") pod 
\"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102642 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102661 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-etc-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102681 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-slash\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102706 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-cni-netd\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102728 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-netns\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" 
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.103013 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-log-socket\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.102978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-var-lib-openvswitch\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.103023 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-host-run-ovn-kubernetes\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.103068 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-run-ovn\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.103157 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovnkube-script-lib\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.106264 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-ovn-node-metrics-cert\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.120739 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrbxf\" (UniqueName: \"kubernetes.io/projected/17ff5bfa-ab64-4787-83f7-a1c0f76e0e52-kube-api-access-nrbxf\") pod \"ovnkube-node-z4jgv\" (UID: \"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52\") " pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.290842 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.599553 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-acl-logging/0.log" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.600823 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j94bh_a7760a15-9ea0-42f0-b42b-72de30071d14/ovn-controller/0.log" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.601407 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh" event={"ID":"a7760a15-9ea0-42f0-b42b-72de30071d14","Type":"ContainerDied","Data":"542002b7bbc20e4e4f7ed68e13539e1b5d49a0679ef11d6b86cc15c762bc318b"} Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.601462 4873 scope.go:117] "RemoveContainer" containerID="fbe398acea08ecbb128c7f23474abd3c929b29591afd83ce34befc3c628c7ddb" Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.601598 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j94bh"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.604801 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/2.log"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.608266 4873 generic.go:334] "Generic (PLEG): container finished" podID="17ff5bfa-ab64-4787-83f7-a1c0f76e0e52" containerID="df2fccfc9c89e51ac4652b545fbabaef56297c048277b07a756768a2b0ee26f0" exitCode=0
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.608331 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerDied","Data":"df2fccfc9c89e51ac4652b545fbabaef56297c048277b07a756768a2b0ee26f0"}
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.608381 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"73206db7004dbb28fd56f4c709771c027dca64cc0dcf37bbaad7dab94fa42938"}
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.630523 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j94bh"]
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.631003 4873 scope.go:117] "RemoveContainer" containerID="02bb2bc90cc5fcd65d83ff7114e7523b46ae5d732b1201b2e60f7514ee3bb578"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.636072 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j94bh"]
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.660596 4873 scope.go:117] "RemoveContainer" containerID="cea4929f2a8788170b1d6989bdf1e48c5bd784c7bbb9b61b05c097142900d3a6"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.674196 4873 scope.go:117] "RemoveContainer" containerID="fff94e9f6ecb5d589a39fbb6470fba09f150e5017921cd07183dcf45906b6bd2"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.697890 4873 scope.go:117] "RemoveContainer" containerID="c3864b7b7b0262329654b4163ef0f78770d072126fe647477af8481dcc7dbfa3"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.717632 4873 scope.go:117] "RemoveContainer" containerID="531c5f175091712c62d61b9bd246031a1be5625e2035eb74646b9a4a8ebbacfe"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.744303 4873 scope.go:117] "RemoveContainer" containerID="818daed009518f5262f167ecc83ecfbebaed5e714be5457d6c1dbb51d83ed9d4"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.759470 4873 scope.go:117] "RemoveContainer" containerID="ca55e531374d74576c0356953177e827ea0a40d99f6d868d58da80ac67053dbd"
Feb 19 09:54:23 crc kubenswrapper[4873]: I0219 09:54:23.775194 4873 scope.go:117] "RemoveContainer" containerID="d6f649d2d74dd0c4285fa3b98974ee330db985e62e9fff24b68e7a028045427b"
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622061 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"b37cdcbfc01849c4081c33074d91a9bdb9d7910165482ec52d5046f02416ab2e"}
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622415 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"90876eee50f60380d3803482056291ca7cbb66d4a94bd642bd4badfbdc28003a"}
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622428 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"8d6930243d79124ba278babd506e04b2816a774458ee69a11165bcea2afc3b95"}
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"7ba7a2fc66e426dad5d3e319ad518d6b99e8a5cb9bfb0c6e6909886ab6142f58"}
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622445 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"f16467f0dcec222b18b211fd35db0821cd93682069022e8e026753865e7ae207"}
Feb 19 09:54:24 crc kubenswrapper[4873]: I0219 09:54:24.622453 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"753dd2222e3bbd0c8433663f5fc6838c82629fe0b2d6214734f2693b625cc40d"}
Feb 19 09:54:25 crc kubenswrapper[4873]: I0219 09:54:25.491638 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7760a15-9ea0-42f0-b42b-72de30071d14" path="/var/lib/kubelet/pods/a7760a15-9ea0-42f0-b42b-72de30071d14/volumes"
Feb 19 09:54:26 crc kubenswrapper[4873]: I0219 09:54:26.641927 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"e430a338971dfc01c5ac6b0ee1465873f07d954a8715391aec85b6cf9139df0a"}
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.664732 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" event={"ID":"17ff5bfa-ab64-4787-83f7-a1c0f76e0e52","Type":"ContainerStarted","Data":"f69bf6caa1dcadd82d7ed2e8a149250996041d06b3becc88823f02533f26d14d"}
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.665127 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.665315 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.665326 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.701436 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.708031 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv" podStartSLOduration=7.708013731 podStartE2EDuration="7.708013731s" podCreationTimestamp="2026-02-19 09:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:54:29.704323242 +0000 UTC m=+578.993754880" watchObservedRunningTime="2026-02-19 09:54:29.708013731 +0000 UTC m=+578.997445379"
Feb 19 09:54:29 crc kubenswrapper[4873]: I0219 09:54:29.708812 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:34 crc kubenswrapper[4873]: I0219 09:54:34.484858 4873 scope.go:117] "RemoveContainer" containerID="ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2"
Feb 19 09:54:34 crc kubenswrapper[4873]: E0219 09:54:34.485629 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4pk8x_openshift-multus(e1ae3d8d-27cf-489f-a6ba-ef914db74bff)\"" pod="openshift-multus/multus-4pk8x" podUID="e1ae3d8d-27cf-489f-a6ba-ef914db74bff"
Feb 19 09:54:47 crc kubenswrapper[4873]: I0219 09:54:47.484639 4873 scope.go:117] "RemoveContainer" containerID="ef1d74ca48faafc4bbde6d98d0cbea070a166074ced1ae06003180d6fd64ebb2"
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.209492 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4pk8x_e1ae3d8d-27cf-489f-a6ba-ef914db74bff/kube-multus/2.log"
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.209737 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4pk8x" event={"ID":"e1ae3d8d-27cf-489f-a6ba-ef914db74bff","Type":"ContainerStarted","Data":"6718dd929284093b0608531f70803abc21ce790d2867e131107c90a949a950c0"}
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.240549 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.240602 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.240641 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7"
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.241170 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:54:48 crc kubenswrapper[4873]: I0219 09:54:48.241225 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552" gracePeriod=600
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.217865 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552" exitCode=0
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.217940 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552"}
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.218165 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663"}
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.218185 4873 scope.go:117] "RemoveContainer" containerID="5ecc9e74f65542c5ba1361ec123b0a6a0ddd50ca3d18c190393ca23d1531b88a"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.469893 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"]
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.471546 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.476467 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.482791 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"]
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.534966 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.535023 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.535086 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmn7\" (UniqueName: \"kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.636824 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmn7\" (UniqueName: \"kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.637023 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.637066 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.637737 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.637951 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.671699 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgmn7\" (UniqueName: \"kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:49 crc kubenswrapper[4873]: I0219 09:54:49.786967 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:50 crc kubenswrapper[4873]: I0219 09:54:50.202791 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"]
Feb 19 09:54:50 crc kubenswrapper[4873]: W0219 09:54:50.206347 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0709e82b_60e9_4aed_8e42_e39928e74c21.slice/crio-a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a WatchSource:0}: Error finding container a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a: Status 404 returned error can't find the container with id a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a
Feb 19 09:54:50 crc kubenswrapper[4873]: I0219 09:54:50.230292 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerStarted","Data":"a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a"}
Feb 19 09:54:51 crc kubenswrapper[4873]: I0219 09:54:51.238966 4873 generic.go:334] "Generic (PLEG): container finished" podID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerID="53795ed9c172f5e15725bb68ee5247725d9d892ce1cec0e62f07e21ea580e8d5" exitCode=0
Feb 19 09:54:51 crc kubenswrapper[4873]: I0219 09:54:51.239050 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerDied","Data":"53795ed9c172f5e15725bb68ee5247725d9d892ce1cec0e62f07e21ea580e8d5"}
Feb 19 09:54:52 crc kubenswrapper[4873]: I0219 09:54:52.245507 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerStarted","Data":"3249a389c2df46ad94e087432a2b02af4fdb5db764822d0ff5a78c43d9aa131d"}
Feb 19 09:54:53 crc kubenswrapper[4873]: I0219 09:54:53.252098 4873 generic.go:334] "Generic (PLEG): container finished" podID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerID="3249a389c2df46ad94e087432a2b02af4fdb5db764822d0ff5a78c43d9aa131d" exitCode=0
Feb 19 09:54:53 crc kubenswrapper[4873]: I0219 09:54:53.252179 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerDied","Data":"3249a389c2df46ad94e087432a2b02af4fdb5db764822d0ff5a78c43d9aa131d"}
Feb 19 09:54:53 crc kubenswrapper[4873]: I0219 09:54:53.316511 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z4jgv"
Feb 19 09:54:54 crc kubenswrapper[4873]: I0219 09:54:54.260140 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerStarted","Data":"7e8e8aee15311b068eb3c75635d192c9297ffb7e135a97626a265ba209a8876f"}
Feb 19 09:54:54 crc kubenswrapper[4873]: I0219 09:54:54.283193 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" podStartSLOduration=4.4068980700000004 podStartE2EDuration="5.283166783s" podCreationTimestamp="2026-02-19 09:54:49 +0000 UTC" firstStartedPulling="2026-02-19 09:54:51.241586638 +0000 UTC m=+600.531018316" lastFinishedPulling="2026-02-19 09:54:52.117855391 +0000 UTC m=+601.407287029" observedRunningTime="2026-02-19 09:54:54.279749644 +0000 UTC m=+603.569181322" watchObservedRunningTime="2026-02-19 09:54:54.283166783 +0000 UTC m=+603.572598461"
Feb 19 09:54:55 crc kubenswrapper[4873]: I0219 09:54:55.270647 4873 generic.go:334] "Generic (PLEG): container finished" podID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerID="7e8e8aee15311b068eb3c75635d192c9297ffb7e135a97626a265ba209a8876f" exitCode=0
Feb 19 09:54:55 crc kubenswrapper[4873]: I0219 09:54:55.270695 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerDied","Data":"7e8e8aee15311b068eb3c75635d192c9297ffb7e135a97626a265ba209a8876f"}
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.595318 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.729113 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util\") pod \"0709e82b-60e9-4aed-8e42-e39928e74c21\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") "
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.729534 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmn7\" (UniqueName: \"kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7\") pod \"0709e82b-60e9-4aed-8e42-e39928e74c21\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") "
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.729624 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle\") pod \"0709e82b-60e9-4aed-8e42-e39928e74c21\" (UID: \"0709e82b-60e9-4aed-8e42-e39928e74c21\") "
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.732121 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle" (OuterVolumeSpecName: "bundle") pod "0709e82b-60e9-4aed-8e42-e39928e74c21" (UID: "0709e82b-60e9-4aed-8e42-e39928e74c21"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.737627 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7" (OuterVolumeSpecName: "kube-api-access-xgmn7") pod "0709e82b-60e9-4aed-8e42-e39928e74c21" (UID: "0709e82b-60e9-4aed-8e42-e39928e74c21"). InnerVolumeSpecName "kube-api-access-xgmn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.751775 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util" (OuterVolumeSpecName: "util") pod "0709e82b-60e9-4aed-8e42-e39928e74c21" (UID: "0709e82b-60e9-4aed-8e42-e39928e74c21"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.831192 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgmn7\" (UniqueName: \"kubernetes.io/projected/0709e82b-60e9-4aed-8e42-e39928e74c21-kube-api-access-xgmn7\") on node \"crc\" DevicePath \"\""
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.831377 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 09:54:56 crc kubenswrapper[4873]: I0219 09:54:56.831524 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0709e82b-60e9-4aed-8e42-e39928e74c21-util\") on node \"crc\" DevicePath \"\""
Feb 19 09:54:57 crc kubenswrapper[4873]: I0219 09:54:57.288168 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn" event={"ID":"0709e82b-60e9-4aed-8e42-e39928e74c21","Type":"ContainerDied","Data":"a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a"}
Feb 19 09:54:57 crc kubenswrapper[4873]: I0219 09:54:57.288219 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8faa0998d8cb7380721d946dec8c42971f2c370b0eddff7fbd229e41aab774a"
Feb 19 09:54:57 crc kubenswrapper[4873]: I0219 09:54:57.288319 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.276212 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"]
Feb 19 09:55:06 crc kubenswrapper[4873]: E0219 09:55:06.277012 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="util"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.277047 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="util"
Feb 19 09:55:06 crc kubenswrapper[4873]: E0219 09:55:06.277068 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="pull"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.277076 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="pull"
Feb 19 09:55:06 crc kubenswrapper[4873]: E0219 09:55:06.277092 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="extract"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.277114 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="extract"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.277237 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0709e82b-60e9-4aed-8e42-e39928e74c21" containerName="extract"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.277682 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.282030 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-xz4ck"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.282051 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.283783 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.289565 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.348232 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4q86\" (UniqueName: \"kubernetes.io/projected/5d79d4d8-e595-4aec-bc0b-7347b826c257-kube-api-access-h4q86\") pod \"obo-prometheus-operator-68bc856cb9-v7nww\" (UID: \"5d79d4d8-e595-4aec-bc0b-7347b826c257\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.398337 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.399215 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.401309 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.401361 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gdtr7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.408133 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.408953 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.414306 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.424893 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.452799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4q86\" (UniqueName: \"kubernetes.io/projected/5d79d4d8-e595-4aec-bc0b-7347b826c257-kube-api-access-h4q86\") pod \"obo-prometheus-operator-68bc856cb9-v7nww\" (UID: \"5d79d4d8-e595-4aec-bc0b-7347b826c257\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.473400 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4q86\" (UniqueName: \"kubernetes.io/projected/5d79d4d8-e595-4aec-bc0b-7347b826c257-kube-api-access-h4q86\") pod \"obo-prometheus-operator-68bc856cb9-v7nww\" (UID: \"5d79d4d8-e595-4aec-bc0b-7347b826c257\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.554508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.554800 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.554856 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.554896 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.592049 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7wtlv"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.592908 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.593405 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.595804 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-xzt2d"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.595804 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655442 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjwl\" (UniqueName: \"kubernetes.io/projected/b23281d2-935e-47c1-bc83-8d00c7649625-kube-api-access-ssjwl\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID: \"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655501 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655535 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655558 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655593 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.655620 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b23281d2-935e-47c1-bc83-8d00c7649625-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID: \"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.658966 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7wtlv"]
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.659132 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.659357 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4724c979-0040-4017-86ce-78d2a8bdb44e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-qptdb\" (UID: \"4724c979-0040-4017-86ce-78d2a8bdb44e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.659549 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.679893 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3180318c-7d9a-454b-8de4-887fabae362b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7\" (UID: \"3180318c-7d9a-454b-8de4-887fabae362b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.715411 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.739913 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.761035 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b23281d2-935e-47c1-bc83-8d00c7649625-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID: \"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.761090 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjwl\" (UniqueName: \"kubernetes.io/projected/b23281d2-935e-47c1-bc83-8d00c7649625-kube-api-access-ssjwl\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID: \"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.777140 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b23281d2-935e-47c1-bc83-8d00c7649625-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID: \"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv"
Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.788827 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjwl\" (UniqueName: \"kubernetes.io/projected/b23281d2-935e-47c1-bc83-8d00c7649625-kube-api-access-ssjwl\") pod \"observability-operator-59bdc8b94-7wtlv\" (UID:
\"b23281d2-935e-47c1-bc83-8d00c7649625\") " pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.804412 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8sflg"] Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.805096 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.807447 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h248j" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.809103 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8sflg"] Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.861975 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22x24\" (UniqueName: \"kubernetes.io/projected/ea1cc2c7-c932-4b3d-b718-d017eb06163f-kube-api-access-22x24\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.862040 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea1cc2c7-c932-4b3d-b718-d017eb06163f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.913007 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.967161 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22x24\" (UniqueName: \"kubernetes.io/projected/ea1cc2c7-c932-4b3d-b718-d017eb06163f-kube-api-access-22x24\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.967224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea1cc2c7-c932-4b3d-b718-d017eb06163f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.967996 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea1cc2c7-c932-4b3d-b718-d017eb06163f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.974796 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww"] Feb 19 09:55:06 crc kubenswrapper[4873]: I0219 09:55:06.987263 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22x24\" (UniqueName: \"kubernetes.io/projected/ea1cc2c7-c932-4b3d-b718-d017eb06163f-kube-api-access-22x24\") pod \"perses-operator-5bf474d74f-8sflg\" (UID: \"ea1cc2c7-c932-4b3d-b718-d017eb06163f\") " pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 
09:55:07.054090 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7"] Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.094474 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb"] Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.139934 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.227235 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7wtlv"] Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.378243 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" event={"ID":"b23281d2-935e-47c1-bc83-8d00c7649625","Type":"ContainerStarted","Data":"66266c4dcb442623b33dce694f35f2f8d293e2117850e91f3ac0c23322c1e20a"} Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.380566 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww" event={"ID":"5d79d4d8-e595-4aec-bc0b-7347b826c257","Type":"ContainerStarted","Data":"2bd1ed5d5780e1aabb0d34f30fbdd2a14ead6a502944d4dab60b868012abf811"} Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.381734 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7" event={"ID":"3180318c-7d9a-454b-8de4-887fabae362b","Type":"ContainerStarted","Data":"41e45e92597c28002120a74d32a9fa24ac5457f23a3a4781e03d58219c226e05"} Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.383138 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb" 
event={"ID":"4724c979-0040-4017-86ce-78d2a8bdb44e","Type":"ContainerStarted","Data":"2e50cdba4d45ebf1bc8d403a88feb427a8c70b23b0858b1d75be00f2ec8c8e34"} Feb 19 09:55:07 crc kubenswrapper[4873]: I0219 09:55:07.538380 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8sflg"] Feb 19 09:55:07 crc kubenswrapper[4873]: W0219 09:55:07.539930 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea1cc2c7_c932_4b3d_b718_d017eb06163f.slice/crio-1e978609538a8f9d3cd46e166066d50f8c9196ee54490f470b7d3182bdd86b03 WatchSource:0}: Error finding container 1e978609538a8f9d3cd46e166066d50f8c9196ee54490f470b7d3182bdd86b03: Status 404 returned error can't find the container with id 1e978609538a8f9d3cd46e166066d50f8c9196ee54490f470b7d3182bdd86b03 Feb 19 09:55:08 crc kubenswrapper[4873]: I0219 09:55:08.407428 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" event={"ID":"ea1cc2c7-c932-4b3d-b718-d017eb06163f","Type":"ContainerStarted","Data":"1e978609538a8f9d3cd46e166066d50f8c9196ee54490f470b7d3182bdd86b03"} Feb 19 09:55:16 crc kubenswrapper[4873]: I0219 09:55:16.454752 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" event={"ID":"b23281d2-935e-47c1-bc83-8d00c7649625","Type":"ContainerStarted","Data":"954d062155d7c95c0378e0a95b64ba43e82ec0a50bc1e31f4897792f425f4997"} Feb 19 09:55:16 crc kubenswrapper[4873]: I0219 09:55:16.455298 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" Feb 19 09:55:16 crc kubenswrapper[4873]: I0219 09:55:16.455980 4873 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7wtlv container/operator namespace/openshift-operators: Readiness probe status=failure output="Get 
\"http://10.217.0.36:8081/healthz\": dial tcp 10.217.0.36:8081: connect: connection refused" start-of-body= Feb 19 09:55:16 crc kubenswrapper[4873]: I0219 09:55:16.456033 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" podUID="b23281d2-935e-47c1-bc83-8d00c7649625" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.36:8081/healthz\": dial tcp 10.217.0.36:8081: connect: connection refused" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.001252 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.034186 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7wtlv" podStartSLOduration=2.058750119 podStartE2EDuration="11.034162612s" podCreationTimestamp="2026-02-19 09:55:06 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.24648909 +0000 UTC m=+616.535920728" lastFinishedPulling="2026-02-19 09:55:16.221901583 +0000 UTC m=+625.511333221" observedRunningTime="2026-02-19 09:55:16.476852306 +0000 UTC m=+625.766283934" watchObservedRunningTime="2026-02-19 09:55:17.034162612 +0000 UTC m=+626.323594260" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.464367 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" event={"ID":"ea1cc2c7-c932-4b3d-b718-d017eb06163f","Type":"ContainerStarted","Data":"dffb83fa120d1dc7bd80035ea64649aa1137fd7ec5c7998514e9681f860bc874"} Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.464515 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.466915 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7" event={"ID":"3180318c-7d9a-454b-8de4-887fabae362b","Type":"ContainerStarted","Data":"e3078666cdb9158697244ee3b9dd6b856404687a4bb3ce880f9f57c7de9a2e3e"} Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.468901 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb" event={"ID":"4724c979-0040-4017-86ce-78d2a8bdb44e","Type":"ContainerStarted","Data":"b1e1720c28432393ef1b4d621cfbf0b0639493691ee2131d04b35d73ce3957f0"} Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.471211 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww" event={"ID":"5d79d4d8-e595-4aec-bc0b-7347b826c257","Type":"ContainerStarted","Data":"82be0e790e03da7caac4e8f6fd3f220fcb680a98df8bf464f9d8fad94d2cdeec"} Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.502812 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" podStartSLOduration=2.866144194 podStartE2EDuration="11.502786066s" podCreationTimestamp="2026-02-19 09:55:06 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.544040552 +0000 UTC m=+616.833472190" lastFinishedPulling="2026-02-19 09:55:16.180682434 +0000 UTC m=+625.470114062" observedRunningTime="2026-02-19 09:55:17.501953786 +0000 UTC m=+626.791385464" watchObservedRunningTime="2026-02-19 09:55:17.502786066 +0000 UTC m=+626.792217704" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.535872 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7" podStartSLOduration=2.441441962 podStartE2EDuration="11.535843697s" podCreationTimestamp="2026-02-19 09:55:06 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.082455861 +0000 UTC m=+616.371887499" 
lastFinishedPulling="2026-02-19 09:55:16.176857596 +0000 UTC m=+625.466289234" observedRunningTime="2026-02-19 09:55:17.527407033 +0000 UTC m=+626.816838681" watchObservedRunningTime="2026-02-19 09:55:17.535843697 +0000 UTC m=+626.825275345" Feb 19 09:55:17 crc kubenswrapper[4873]: I0219 09:55:17.554175 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7496f9f864-qptdb" podStartSLOduration=2.487956504 podStartE2EDuration="11.554155539s" podCreationTimestamp="2026-02-19 09:55:06 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.109832212 +0000 UTC m=+616.399263850" lastFinishedPulling="2026-02-19 09:55:16.176031237 +0000 UTC m=+625.465462885" observedRunningTime="2026-02-19 09:55:17.549591864 +0000 UTC m=+626.839023552" watchObservedRunningTime="2026-02-19 09:55:17.554155539 +0000 UTC m=+626.843587207" Feb 19 09:55:27 crc kubenswrapper[4873]: I0219 09:55:27.144846 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8sflg" Feb 19 09:55:27 crc kubenswrapper[4873]: I0219 09:55:27.162822 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v7nww" podStartSLOduration=11.973090209 podStartE2EDuration="21.162791398s" podCreationTimestamp="2026-02-19 09:55:06 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.009261956 +0000 UTC m=+616.298693594" lastFinishedPulling="2026-02-19 09:55:16.198963145 +0000 UTC m=+625.488394783" observedRunningTime="2026-02-19 09:55:17.621677904 +0000 UTC m=+626.911109542" watchObservedRunningTime="2026-02-19 09:55:27.162791398 +0000 UTC m=+636.452223086" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.594130 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv"] Feb 19 09:55:43 crc kubenswrapper[4873]: 
I0219 09:55:43.596017 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.597865 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.610294 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv"] Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.795250 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.795325 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.795347 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkj5g\" (UniqueName: \"kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.896557 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.896639 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.896663 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkj5g\" (UniqueName: \"kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.897096 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.897448 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:43 crc kubenswrapper[4873]: I0219 09:55:43.930043 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkj5g\" (UniqueName: \"kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:44 crc kubenswrapper[4873]: I0219 09:55:44.218708 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:44 crc kubenswrapper[4873]: I0219 09:55:44.643504 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv"] Feb 19 09:55:45 crc kubenswrapper[4873]: I0219 09:55:45.646742 4873 generic.go:334] "Generic (PLEG): container finished" podID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerID="a6819f539c003d3080c8371d4c8581e7ea7bd72a27cc888742e3f2b0e593c378" exitCode=0 Feb 19 09:55:45 crc kubenswrapper[4873]: I0219 09:55:45.646787 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" event={"ID":"14a07337-b89d-4574-aa0f-f9a3cdebdd48","Type":"ContainerDied","Data":"a6819f539c003d3080c8371d4c8581e7ea7bd72a27cc888742e3f2b0e593c378"} Feb 19 09:55:45 crc kubenswrapper[4873]: I0219 09:55:45.646815 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" event={"ID":"14a07337-b89d-4574-aa0f-f9a3cdebdd48","Type":"ContainerStarted","Data":"50044929f3c9b7b6a3f1c5020a712e49f731400273f63f27992d4494a8336eb0"} Feb 19 09:55:47 crc kubenswrapper[4873]: I0219 09:55:47.665426 4873 generic.go:334] "Generic (PLEG): container finished" podID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerID="3d1c39529f6c4426f0fd82b65011298f8e3885e6afc1443fbf3aeb520d92c38f" exitCode=0 Feb 19 09:55:47 crc kubenswrapper[4873]: I0219 09:55:47.665510 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" event={"ID":"14a07337-b89d-4574-aa0f-f9a3cdebdd48","Type":"ContainerDied","Data":"3d1c39529f6c4426f0fd82b65011298f8e3885e6afc1443fbf3aeb520d92c38f"} Feb 19 09:55:48 crc kubenswrapper[4873]: I0219 09:55:48.675565 4873 generic.go:334] "Generic (PLEG): container finished" podID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerID="fffba7f3fa8660d9ca2ad3df6b91fac76d1d73d78fce4e9e00e4992f32248212" exitCode=0 Feb 19 09:55:48 crc kubenswrapper[4873]: I0219 09:55:48.675671 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" event={"ID":"14a07337-b89d-4574-aa0f-f9a3cdebdd48","Type":"ContainerDied","Data":"fffba7f3fa8660d9ca2ad3df6b91fac76d1d73d78fce4e9e00e4992f32248212"} Feb 19 09:55:49 crc kubenswrapper[4873]: I0219 09:55:49.955437 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.037823 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util\") pod \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.037998 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle\") pod \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.038053 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkj5g\" (UniqueName: \"kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g\") pod \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\" (UID: \"14a07337-b89d-4574-aa0f-f9a3cdebdd48\") " Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.039921 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle" (OuterVolumeSpecName: "bundle") pod "14a07337-b89d-4574-aa0f-f9a3cdebdd48" (UID: "14a07337-b89d-4574-aa0f-f9a3cdebdd48"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.046326 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g" (OuterVolumeSpecName: "kube-api-access-jkj5g") pod "14a07337-b89d-4574-aa0f-f9a3cdebdd48" (UID: "14a07337-b89d-4574-aa0f-f9a3cdebdd48"). InnerVolumeSpecName "kube-api-access-jkj5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.059005 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util" (OuterVolumeSpecName: "util") pod "14a07337-b89d-4574-aa0f-f9a3cdebdd48" (UID: "14a07337-b89d-4574-aa0f-f9a3cdebdd48"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.138773 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.138811 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkj5g\" (UniqueName: \"kubernetes.io/projected/14a07337-b89d-4574-aa0f-f9a3cdebdd48-kube-api-access-jkj5g\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.138823 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a07337-b89d-4574-aa0f-f9a3cdebdd48-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.688419 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" event={"ID":"14a07337-b89d-4574-aa0f-f9a3cdebdd48","Type":"ContainerDied","Data":"50044929f3c9b7b6a3f1c5020a712e49f731400273f63f27992d4494a8336eb0"} Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.688696 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50044929f3c9b7b6a3f1c5020a712e49f731400273f63f27992d4494a8336eb0" Feb 19 09:55:50 crc kubenswrapper[4873]: I0219 09:55:50.688533 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.451526 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qlgxw"] Feb 19 09:55:52 crc kubenswrapper[4873]: E0219 09:55:52.451912 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="pull" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.451940 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="pull" Feb 19 09:55:52 crc kubenswrapper[4873]: E0219 09:55:52.451969 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="util" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.451988 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="util" Feb 19 09:55:52 crc kubenswrapper[4873]: E0219 09:55:52.452012 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="extract" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.452026 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="extract" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.452272 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a07337-b89d-4574-aa0f-f9a3cdebdd48" containerName="extract" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.452897 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.455016 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-snb5x" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.455657 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.457182 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.475640 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qlgxw"] Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.568723 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckxs\" (UniqueName: \"kubernetes.io/projected/f7f28c8a-4571-485c-96a2-fc1c5856e3ea-kube-api-access-vckxs\") pod \"nmstate-operator-694c9596b7-qlgxw\" (UID: \"f7f28c8a-4571-485c-96a2-fc1c5856e3ea\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.670530 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vckxs\" (UniqueName: \"kubernetes.io/projected/f7f28c8a-4571-485c-96a2-fc1c5856e3ea-kube-api-access-vckxs\") pod \"nmstate-operator-694c9596b7-qlgxw\" (UID: \"f7f28c8a-4571-485c-96a2-fc1c5856e3ea\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.703076 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vckxs\" (UniqueName: \"kubernetes.io/projected/f7f28c8a-4571-485c-96a2-fc1c5856e3ea-kube-api-access-vckxs\") pod \"nmstate-operator-694c9596b7-qlgxw\" (UID: 
\"f7f28c8a-4571-485c-96a2-fc1c5856e3ea\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" Feb 19 09:55:52 crc kubenswrapper[4873]: I0219 09:55:52.784839 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" Feb 19 09:55:53 crc kubenswrapper[4873]: I0219 09:55:53.136763 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qlgxw"] Feb 19 09:55:53 crc kubenswrapper[4873]: W0219 09:55:53.150317 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7f28c8a_4571_485c_96a2_fc1c5856e3ea.slice/crio-3a58557f6a8022c3fd87a066a9c365bc8597898a03ce230f60be2b3d0e44ccf2 WatchSource:0}: Error finding container 3a58557f6a8022c3fd87a066a9c365bc8597898a03ce230f60be2b3d0e44ccf2: Status 404 returned error can't find the container with id 3a58557f6a8022c3fd87a066a9c365bc8597898a03ce230f60be2b3d0e44ccf2 Feb 19 09:55:53 crc kubenswrapper[4873]: I0219 09:55:53.712505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" event={"ID":"f7f28c8a-4571-485c-96a2-fc1c5856e3ea","Type":"ContainerStarted","Data":"3a58557f6a8022c3fd87a066a9c365bc8597898a03ce230f60be2b3d0e44ccf2"} Feb 19 09:55:55 crc kubenswrapper[4873]: I0219 09:55:55.726489 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" event={"ID":"f7f28c8a-4571-485c-96a2-fc1c5856e3ea","Type":"ContainerStarted","Data":"bcd3218e43adeea5d5a965457865b083ce1cb88c78f8236fbba93be41cbb2f5b"} Feb 19 09:55:55 crc kubenswrapper[4873]: I0219 09:55:55.752987 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-qlgxw" podStartSLOduration=1.7699926559999999 podStartE2EDuration="3.752962488s" podCreationTimestamp="2026-02-19 09:55:52 +0000 UTC" 
firstStartedPulling="2026-02-19 09:55:53.153850684 +0000 UTC m=+662.443282322" lastFinishedPulling="2026-02-19 09:55:55.136820516 +0000 UTC m=+664.426252154" observedRunningTime="2026-02-19 09:55:55.750462126 +0000 UTC m=+665.039893774" watchObservedRunningTime="2026-02-19 09:55:55.752962488 +0000 UTC m=+665.042394176" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.644498 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8jgss"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.645358 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.647134 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hzwdm" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.661740 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8jgss"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.670176 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.671467 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.672873 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.678418 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-75txf"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.679586 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.705182 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.727794 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-dbus-socket\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.727851 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8wr\" (UniqueName: \"kubernetes.io/projected/7af074a2-c1f7-4253-8efc-065748e0452b-kube-api-access-hf8wr\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.727882 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-nmstate-lock\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.727911 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7af074a2-c1f7-4253-8efc-065748e0452b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.727995 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7dlv\" (UniqueName: \"kubernetes.io/projected/62408ce4-73ce-4726-91c1-96f645c39dee-kube-api-access-q7dlv\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.728031 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9bz\" (UniqueName: \"kubernetes.io/projected/3b960434-ef37-45ae-aa50-8d719c8e2df5-kube-api-access-4m9bz\") pod \"nmstate-metrics-58c85c668d-8jgss\" (UID: \"3b960434-ef37-45ae-aa50-8d719c8e2df5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.728062 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-ovs-socket\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.792454 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.800187 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.810550 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rjk6l" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.810753 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.810868 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.817643 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828758 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9bz\" (UniqueName: \"kubernetes.io/projected/3b960434-ef37-45ae-aa50-8d719c8e2df5-kube-api-access-4m9bz\") pod \"nmstate-metrics-58c85c668d-8jgss\" (UID: \"3b960434-ef37-45ae-aa50-8d719c8e2df5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828813 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9b3c6348-1c17-4774-9739-7a1dd3021d81-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828838 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-ovs-socket\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " 
pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828877 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-dbus-socket\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828896 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf8wr\" (UniqueName: \"kubernetes.io/projected/7af074a2-c1f7-4253-8efc-065748e0452b-kube-api-access-hf8wr\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828912 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-nmstate-lock\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828928 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7af074a2-c1f7-4253-8efc-065748e0452b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828956 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzd9\" (UniqueName: \"kubernetes.io/projected/9b3c6348-1c17-4774-9739-7a1dd3021d81-kube-api-access-8vzd9\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: 
\"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.828979 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.829004 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7dlv\" (UniqueName: \"kubernetes.io/projected/62408ce4-73ce-4726-91c1-96f645c39dee-kube-api-access-q7dlv\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.830520 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-ovs-socket\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.830978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-nmstate-lock\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.831372 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/62408ce4-73ce-4726-91c1-96f645c39dee-dbus-socket\") pod \"nmstate-handler-75txf\" (UID: 
\"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.845811 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7af074a2-c1f7-4253-8efc-065748e0452b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.846653 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9bz\" (UniqueName: \"kubernetes.io/projected/3b960434-ef37-45ae-aa50-8d719c8e2df5-kube-api-access-4m9bz\") pod \"nmstate-metrics-58c85c668d-8jgss\" (UID: \"3b960434-ef37-45ae-aa50-8d719c8e2df5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.847160 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf8wr\" (UniqueName: \"kubernetes.io/projected/7af074a2-c1f7-4253-8efc-065748e0452b-kube-api-access-hf8wr\") pod \"nmstate-webhook-866bcb46dc-nfh8w\" (UID: \"7af074a2-c1f7-4253-8efc-065748e0452b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.850611 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7dlv\" (UniqueName: \"kubernetes.io/projected/62408ce4-73ce-4726-91c1-96f645c39dee-kube-api-access-q7dlv\") pod \"nmstate-handler-75txf\" (UID: \"62408ce4-73ce-4726-91c1-96f645c39dee\") " pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.930517 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9b3c6348-1c17-4774-9739-7a1dd3021d81-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: 
\"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.930613 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzd9\" (UniqueName: \"kubernetes.io/projected/9b3c6348-1c17-4774-9739-7a1dd3021d81-kube-api-access-8vzd9\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.930645 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: E0219 09:55:56.930779 4873 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 19 09:55:56 crc kubenswrapper[4873]: E0219 09:55:56.930836 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert podName:9b3c6348-1c17-4774-9739-7a1dd3021d81 nodeName:}" failed. No retries permitted until 2026-02-19 09:55:57.430816823 +0000 UTC m=+666.720248461 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-9cr2m" (UID: "9b3c6348-1c17-4774-9739-7a1dd3021d81") : secret "plugin-serving-cert" not found Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.931919 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9b3c6348-1c17-4774-9739-7a1dd3021d81-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.952411 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vzd9\" (UniqueName: \"kubernetes.io/projected/9b3c6348-1c17-4774-9739-7a1dd3021d81-kube-api-access-8vzd9\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.965371 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.972358 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9fd68db6b-q4dk6"] Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.973182 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:56 crc kubenswrapper[4873]: I0219 09:55:56.985512 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.003331 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031354 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-oauth-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031403 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031421 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-trusted-ca-bundle\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031438 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031459 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz95l\" (UniqueName: 
\"kubernetes.io/projected/e296ec6d-4270-44f2-a73e-16ca3ac286f2-kube-api-access-sz95l\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031487 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-service-ca\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.031537 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-oauth-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.046715 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9fd68db6b-q4dk6"] Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.132842 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-oauth-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133196 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-oauth-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 
09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133225 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133240 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-trusted-ca-bundle\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133254 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133276 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz95l\" (UniqueName: \"kubernetes.io/projected/e296ec6d-4270-44f2-a73e-16ca3ac286f2-kube-api-access-sz95l\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.133304 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-service-ca\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 
09:55:57.134010 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-oauth-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.134210 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.134733 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-trusted-ca-bundle\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.135246 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e296ec6d-4270-44f2-a73e-16ca3ac286f2-service-ca\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.136543 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-oauth-config\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.136663 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e296ec6d-4270-44f2-a73e-16ca3ac286f2-console-serving-cert\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.149067 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz95l\" (UniqueName: \"kubernetes.io/projected/e296ec6d-4270-44f2-a73e-16ca3ac286f2-kube-api-access-sz95l\") pod \"console-9fd68db6b-q4dk6\" (UID: \"e296ec6d-4270-44f2-a73e-16ca3ac286f2\") " pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.216738 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8jgss"] Feb 19 09:55:57 crc kubenswrapper[4873]: W0219 09:55:57.220486 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b960434_ef37_45ae_aa50_8d719c8e2df5.slice/crio-4d2f6fab116e5c3697d562887c74f6d00a719f7bdeb4748b1a1db81da93f7e89 WatchSource:0}: Error finding container 4d2f6fab116e5c3697d562887c74f6d00a719f7bdeb4748b1a1db81da93f7e89: Status 404 returned error can't find the container with id 4d2f6fab116e5c3697d562887c74f6d00a719f7bdeb4748b1a1db81da93f7e89 Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.331134 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.441577 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.447892 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3c6348-1c17-4774-9739-7a1dd3021d81-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-9cr2m\" (UID: \"9b3c6348-1c17-4774-9739-7a1dd3021d81\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.506603 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w"] Feb 19 09:55:57 crc kubenswrapper[4873]: W0219 09:55:57.512992 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af074a2_c1f7_4253_8efc_065748e0452b.slice/crio-92b1864a7aa4bd5eace02fc56e5084af9d69850210c3ebde41dc98bddbc366d5 WatchSource:0}: Error finding container 92b1864a7aa4bd5eace02fc56e5084af9d69850210c3ebde41dc98bddbc366d5: Status 404 returned error can't find the container with id 92b1864a7aa4bd5eace02fc56e5084af9d69850210c3ebde41dc98bddbc366d5 Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.610995 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9fd68db6b-q4dk6"] Feb 19 09:55:57 crc kubenswrapper[4873]: W0219 09:55:57.618506 4873 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode296ec6d_4270_44f2_a73e_16ca3ac286f2.slice/crio-658e1ca75f45b4f5981d224251d0cdfc634819e233906e2e2af271643db59640 WatchSource:0}: Error finding container 658e1ca75f45b4f5981d224251d0cdfc634819e233906e2e2af271643db59640: Status 404 returned error can't find the container with id 658e1ca75f45b4f5981d224251d0cdfc634819e233906e2e2af271643db59640 Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.730558 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.742341 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-75txf" event={"ID":"62408ce4-73ce-4726-91c1-96f645c39dee","Type":"ContainerStarted","Data":"e50ab46848007b49de33727979617347fac41a1bf3840c89b1336dac8785c0a3"} Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.745906 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9fd68db6b-q4dk6" event={"ID":"e296ec6d-4270-44f2-a73e-16ca3ac286f2","Type":"ContainerStarted","Data":"db2dcca52446f1249df65049e1489e4d5b3d262620fd3e3a2c380dc5f4e3673d"} Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.745970 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9fd68db6b-q4dk6" event={"ID":"e296ec6d-4270-44f2-a73e-16ca3ac286f2","Type":"ContainerStarted","Data":"658e1ca75f45b4f5981d224251d0cdfc634819e233906e2e2af271643db59640"} Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.749244 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" event={"ID":"3b960434-ef37-45ae-aa50-8d719c8e2df5","Type":"ContainerStarted","Data":"4d2f6fab116e5c3697d562887c74f6d00a719f7bdeb4748b1a1db81da93f7e89"} Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.751579 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" event={"ID":"7af074a2-c1f7-4253-8efc-065748e0452b","Type":"ContainerStarted","Data":"92b1864a7aa4bd5eace02fc56e5084af9d69850210c3ebde41dc98bddbc366d5"} Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.770874 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9fd68db6b-q4dk6" podStartSLOduration=1.770853839 podStartE2EDuration="1.770853839s" podCreationTimestamp="2026-02-19 09:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:57.770724686 +0000 UTC m=+667.060156374" watchObservedRunningTime="2026-02-19 09:55:57.770853839 +0000 UTC m=+667.060285487" Feb 19 09:55:57 crc kubenswrapper[4873]: I0219 09:55:57.964599 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m"] Feb 19 09:55:58 crc kubenswrapper[4873]: I0219 09:55:58.761082 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" event={"ID":"9b3c6348-1c17-4774-9739-7a1dd3021d81","Type":"ContainerStarted","Data":"e8543bb448a5bdde93b55c52d6b51546a92b1f5767f62d241a0fafccab300441"} Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.779239 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" event={"ID":"3b960434-ef37-45ae-aa50-8d719c8e2df5","Type":"ContainerStarted","Data":"bdd6d3fc8ac3b1932764fb8824aba4c7a0cbe2d3ddd8e28196a8914bc6bc0fba"} Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.781748 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" event={"ID":"9b3c6348-1c17-4774-9739-7a1dd3021d81","Type":"ContainerStarted","Data":"e8073e35b5c3f9e49c7ebc00a1c8cb948e7832a45a73476356b23d9d38e5df4b"} Feb 19 09:56:00 crc 
kubenswrapper[4873]: I0219 09:56:00.786157 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" event={"ID":"7af074a2-c1f7-4253-8efc-065748e0452b","Type":"ContainerStarted","Data":"62560839dff353f19ffcb6aa901023703c7e7659d45c01502ea24939023b23df"} Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.786395 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.795038 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-75txf" event={"ID":"62408ce4-73ce-4726-91c1-96f645c39dee","Type":"ContainerStarted","Data":"c6dc947c1ab150d4460da21d597779792f8da677a79252f400414d17b9b36034"} Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.795516 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.814506 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-9cr2m" podStartSLOduration=2.814212096 podStartE2EDuration="4.814472205s" podCreationTimestamp="2026-02-19 09:55:56 +0000 UTC" firstStartedPulling="2026-02-19 09:55:57.980075244 +0000 UTC m=+667.269506902" lastFinishedPulling="2026-02-19 09:55:59.980335373 +0000 UTC m=+669.269767011" observedRunningTime="2026-02-19 09:56:00.800671316 +0000 UTC m=+670.090103004" watchObservedRunningTime="2026-02-19 09:56:00.814472205 +0000 UTC m=+670.103903903" Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.827677 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" podStartSLOduration=2.361713289 podStartE2EDuration="4.827650519s" podCreationTimestamp="2026-02-19 09:55:56 +0000 UTC" firstStartedPulling="2026-02-19 09:55:57.516219007 
+0000 UTC m=+666.805650645" lastFinishedPulling="2026-02-19 09:55:59.982156237 +0000 UTC m=+669.271587875" observedRunningTime="2026-02-19 09:56:00.823909667 +0000 UTC m=+670.113341345" watchObservedRunningTime="2026-02-19 09:56:00.827650519 +0000 UTC m=+670.117082197" Feb 19 09:56:00 crc kubenswrapper[4873]: I0219 09:56:00.854194 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-75txf" podStartSLOduration=1.890434291 podStartE2EDuration="4.854170651s" podCreationTimestamp="2026-02-19 09:55:56 +0000 UTC" firstStartedPulling="2026-02-19 09:55:57.034545844 +0000 UTC m=+666.323977482" lastFinishedPulling="2026-02-19 09:55:59.998282164 +0000 UTC m=+669.287713842" observedRunningTime="2026-02-19 09:56:00.848778779 +0000 UTC m=+670.138210457" watchObservedRunningTime="2026-02-19 09:56:00.854170651 +0000 UTC m=+670.143602329" Feb 19 09:56:02 crc kubenswrapper[4873]: I0219 09:56:02.807267 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" event={"ID":"3b960434-ef37-45ae-aa50-8d719c8e2df5","Type":"ContainerStarted","Data":"dcd03124864bf577101a5845791573a9ded2ec7e06d8d104785cc28702a462d0"} Feb 19 09:56:02 crc kubenswrapper[4873]: I0219 09:56:02.830828 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8jgss" podStartSLOduration=1.572252687 podStartE2EDuration="6.830807858s" podCreationTimestamp="2026-02-19 09:55:56 +0000 UTC" firstStartedPulling="2026-02-19 09:55:57.224052304 +0000 UTC m=+666.513483952" lastFinishedPulling="2026-02-19 09:56:02.482607485 +0000 UTC m=+671.772039123" observedRunningTime="2026-02-19 09:56:02.826804919 +0000 UTC m=+672.116236567" watchObservedRunningTime="2026-02-19 09:56:02.830807858 +0000 UTC m=+672.120239506" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.034978 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-75txf" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.332043 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.332145 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.338695 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.845794 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9fd68db6b-q4dk6" Feb 19 09:56:07 crc kubenswrapper[4873]: I0219 09:56:07.901997 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-shnwj"] Feb 19 09:56:16 crc kubenswrapper[4873]: I0219 09:56:16.992310 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-nfh8w" Feb 19 09:56:32 crc kubenswrapper[4873]: I0219 09:56:32.961849 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-shnwj" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" containerID="cri-o://cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381" gracePeriod=15 Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.395979 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-shnwj_10aa25f4-7549-468a-b42f-19305ad066dd/console/0.log" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.396288 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538201 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538250 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538301 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ggch\" (UniqueName: \"kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538330 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538420 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538448 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.538468 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle\") pod \"10aa25f4-7549-468a-b42f-19305ad066dd\" (UID: \"10aa25f4-7549-468a-b42f-19305ad066dd\") " Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539137 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539154 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config" (OuterVolumeSpecName: "console-config") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539170 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539197 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca" (OuterVolumeSpecName: "service-ca") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539606 4873 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539632 4873 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539643 4873 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.539655 4873 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10aa25f4-7549-468a-b42f-19305ad066dd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.545577 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch" (OuterVolumeSpecName: "kube-api-access-9ggch") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "kube-api-access-9ggch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.546264 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.548671 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "10aa25f4-7549-468a-b42f-19305ad066dd" (UID: "10aa25f4-7549-468a-b42f-19305ad066dd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.619141 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf"] Feb 19 09:56:33 crc kubenswrapper[4873]: E0219 09:56:33.619389 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.619412 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.619551 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" containerName="console" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.620462 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.623089 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.627027 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf"] Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.644642 4873 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.644687 4873 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/10aa25f4-7549-468a-b42f-19305ad066dd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.644705 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ggch\" (UniqueName: \"kubernetes.io/projected/10aa25f4-7549-468a-b42f-19305ad066dd-kube-api-access-9ggch\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.745635 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.745701 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dm7p5\" (UniqueName: \"kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.745730 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.846975 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.847041 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm7p5\" (UniqueName: \"kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.847063 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.847526 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.847818 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.869543 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm7p5\" (UniqueName: \"kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:33 crc kubenswrapper[4873]: I0219 09:56:33.937917 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033686 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-shnwj_10aa25f4-7549-468a-b42f-19305ad066dd/console/0.log" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033756 4873 generic.go:334] "Generic (PLEG): container finished" podID="10aa25f4-7549-468a-b42f-19305ad066dd" containerID="cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381" exitCode=2 Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033814 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-shnwj" event={"ID":"10aa25f4-7549-468a-b42f-19305ad066dd","Type":"ContainerDied","Data":"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381"} Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033877 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-shnwj" event={"ID":"10aa25f4-7549-468a-b42f-19305ad066dd","Type":"ContainerDied","Data":"7a581424f0da8ea44b76eb3be0d323e922f9fdfbe4bef5b6c66bc43929d92666"} Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033888 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-shnwj" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.033906 4873 scope.go:117] "RemoveContainer" containerID="cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.053909 4873 scope.go:117] "RemoveContainer" containerID="cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381" Feb 19 09:56:34 crc kubenswrapper[4873]: E0219 09:56:34.056660 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381\": container with ID starting with cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381 not found: ID does not exist" containerID="cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.056721 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381"} err="failed to get container status \"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381\": rpc error: code = NotFound desc = could not find container \"cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381\": container with ID starting with cc3b6e572218ab345f1c66c2372f7ae648deb477f5bc61959e0bd3585166d381 not found: ID does not exist" Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.082955 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-shnwj"] Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.086671 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-shnwj"] Feb 19 09:56:34 crc kubenswrapper[4873]: I0219 09:56:34.160484 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf"] Feb 19 09:56:35 crc kubenswrapper[4873]: I0219 09:56:35.041532 4873 generic.go:334] "Generic (PLEG): container finished" podID="7a09955d-14f6-4877-bcb4-701d57165495" containerID="c6bb6bf413ed07d79cee369d269cc426142baaf9a8b0bab4a28130a750e9fcc6" exitCode=0 Feb 19 09:56:35 crc kubenswrapper[4873]: I0219 09:56:35.041830 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" event={"ID":"7a09955d-14f6-4877-bcb4-701d57165495","Type":"ContainerDied","Data":"c6bb6bf413ed07d79cee369d269cc426142baaf9a8b0bab4a28130a750e9fcc6"} Feb 19 09:56:35 crc kubenswrapper[4873]: I0219 09:56:35.041851 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" event={"ID":"7a09955d-14f6-4877-bcb4-701d57165495","Type":"ContainerStarted","Data":"4b33092e42b0328d0fa22229ed7f206f1d4f098b3756047293dd64c3d91c97b7"} Feb 19 09:56:35 crc kubenswrapper[4873]: I0219 09:56:35.493451 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10aa25f4-7549-468a-b42f-19305ad066dd" path="/var/lib/kubelet/pods/10aa25f4-7549-468a-b42f-19305ad066dd/volumes" Feb 19 09:56:37 crc kubenswrapper[4873]: I0219 09:56:37.060085 4873 generic.go:334] "Generic (PLEG): container finished" podID="7a09955d-14f6-4877-bcb4-701d57165495" containerID="0d082f7d81cc31d82999718fe69b040bf30f593a9e99dca26bec10c52b3e11a4" exitCode=0 Feb 19 09:56:37 crc kubenswrapper[4873]: I0219 09:56:37.060167 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" event={"ID":"7a09955d-14f6-4877-bcb4-701d57165495","Type":"ContainerDied","Data":"0d082f7d81cc31d82999718fe69b040bf30f593a9e99dca26bec10c52b3e11a4"} Feb 19 09:56:38 crc kubenswrapper[4873]: I0219 09:56:38.072717 
4873 generic.go:334] "Generic (PLEG): container finished" podID="7a09955d-14f6-4877-bcb4-701d57165495" containerID="d9fc4b1b7f625573fa52768af6d8503a3364a46ce933263c22dfdc17ed349bf8" exitCode=0 Feb 19 09:56:38 crc kubenswrapper[4873]: I0219 09:56:38.072807 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" event={"ID":"7a09955d-14f6-4877-bcb4-701d57165495","Type":"ContainerDied","Data":"d9fc4b1b7f625573fa52768af6d8503a3364a46ce933263c22dfdc17ed349bf8"} Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.352958 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.530844 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm7p5\" (UniqueName: \"kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5\") pod \"7a09955d-14f6-4877-bcb4-701d57165495\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.530897 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util\") pod \"7a09955d-14f6-4877-bcb4-701d57165495\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.530932 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle\") pod \"7a09955d-14f6-4877-bcb4-701d57165495\" (UID: \"7a09955d-14f6-4877-bcb4-701d57165495\") " Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.532447 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle" (OuterVolumeSpecName: "bundle") pod "7a09955d-14f6-4877-bcb4-701d57165495" (UID: "7a09955d-14f6-4877-bcb4-701d57165495"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.538870 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5" (OuterVolumeSpecName: "kube-api-access-dm7p5") pod "7a09955d-14f6-4877-bcb4-701d57165495" (UID: "7a09955d-14f6-4877-bcb4-701d57165495"). InnerVolumeSpecName "kube-api-access-dm7p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.632257 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm7p5\" (UniqueName: \"kubernetes.io/projected/7a09955d-14f6-4877-bcb4-701d57165495-kube-api-access-dm7p5\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.632471 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.865800 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util" (OuterVolumeSpecName: "util") pod "7a09955d-14f6-4877-bcb4-701d57165495" (UID: "7a09955d-14f6-4877-bcb4-701d57165495"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:39 crc kubenswrapper[4873]: I0219 09:56:39.935184 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a09955d-14f6-4877-bcb4-701d57165495-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:40 crc kubenswrapper[4873]: I0219 09:56:40.087786 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" event={"ID":"7a09955d-14f6-4877-bcb4-701d57165495","Type":"ContainerDied","Data":"4b33092e42b0328d0fa22229ed7f206f1d4f098b3756047293dd64c3d91c97b7"} Feb 19 09:56:40 crc kubenswrapper[4873]: I0219 09:56:40.087997 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b33092e42b0328d0fa22229ed7f206f1d4f098b3756047293dd64c3d91c97b7" Feb 19 09:56:40 crc kubenswrapper[4873]: I0219 09:56:40.087859 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.240164 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.240822 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.992281 4873 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-6897955989-f6tl8"] Feb 19 09:56:48 crc kubenswrapper[4873]: E0219 09:56:48.992568 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="pull" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.992588 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="pull" Feb 19 09:56:48 crc kubenswrapper[4873]: E0219 09:56:48.992601 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="util" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.992609 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="util" Feb 19 09:56:48 crc kubenswrapper[4873]: E0219 09:56:48.992618 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="extract" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.992626 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="extract" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.992764 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a09955d-14f6-4877-bcb4-701d57165495" containerName="extract" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.993307 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.996815 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 09:56:48 crc kubenswrapper[4873]: I0219 09:56:48.999522 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.000076 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.000157 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.004776 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jxtk7" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.006202 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6897955989-f6tl8"] Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.047860 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdql\" (UniqueName: \"kubernetes.io/projected/94f344cf-0f09-4812-ab40-dcce7f260a53-kube-api-access-fpdql\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.047968 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-webhook-cert\") pod 
\"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.048013 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-apiservice-cert\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.149039 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-apiservice-cert\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.149328 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdql\" (UniqueName: \"kubernetes.io/projected/94f344cf-0f09-4812-ab40-dcce7f260a53-kube-api-access-fpdql\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.149458 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-webhook-cert\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc 
kubenswrapper[4873]: I0219 09:56:49.155250 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-apiservice-cert\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.160727 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94f344cf-0f09-4812-ab40-dcce7f260a53-webhook-cert\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.170859 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdql\" (UniqueName: \"kubernetes.io/projected/94f344cf-0f09-4812-ab40-dcce7f260a53-kube-api-access-fpdql\") pod \"metallb-operator-controller-manager-6897955989-f6tl8\" (UID: \"94f344cf-0f09-4812-ab40-dcce7f260a53\") " pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.252701 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph"] Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.253399 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.255911 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.255918 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.258229 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-87nwr" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.267116 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph"] Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.316446 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.454408 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-webhook-cert\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.454806 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.454837 
4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2d4\" (UniqueName: \"kubernetes.io/projected/e9d29e18-f362-478f-911d-ed979e43aae1-kube-api-access-7x2d4\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.555782 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-webhook-cert\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.555861 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.555893 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2d4\" (UniqueName: \"kubernetes.io/projected/e9d29e18-f362-478f-911d-ed979e43aae1-kube-api-access-7x2d4\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.570862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-webhook-cert\") pod 
\"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.570911 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9d29e18-f362-478f-911d-ed979e43aae1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.578481 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2d4\" (UniqueName: \"kubernetes.io/projected/e9d29e18-f362-478f-911d-ed979e43aae1-kube-api-access-7x2d4\") pod \"metallb-operator-webhook-server-7bf7457c95-rq2ph\" (UID: \"e9d29e18-f362-478f-911d-ed979e43aae1\") " pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.760318 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6897955989-f6tl8"] Feb 19 09:56:49 crc kubenswrapper[4873]: I0219 09:56:49.864972 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:50 crc kubenswrapper[4873]: I0219 09:56:50.082686 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph"] Feb 19 09:56:50 crc kubenswrapper[4873]: W0219 09:56:50.091384 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9d29e18_f362_478f_911d_ed979e43aae1.slice/crio-87879c343f83077b31a0aa3c0154e6ab3a202da933b8ad01c407ed97ec183fc0 WatchSource:0}: Error finding container 87879c343f83077b31a0aa3c0154e6ab3a202da933b8ad01c407ed97ec183fc0: Status 404 returned error can't find the container with id 87879c343f83077b31a0aa3c0154e6ab3a202da933b8ad01c407ed97ec183fc0 Feb 19 09:56:50 crc kubenswrapper[4873]: I0219 09:56:50.208068 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" event={"ID":"e9d29e18-f362-478f-911d-ed979e43aae1","Type":"ContainerStarted","Data":"87879c343f83077b31a0aa3c0154e6ab3a202da933b8ad01c407ed97ec183fc0"} Feb 19 09:56:50 crc kubenswrapper[4873]: I0219 09:56:50.211854 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" event={"ID":"94f344cf-0f09-4812-ab40-dcce7f260a53","Type":"ContainerStarted","Data":"4a9bec928bfcffce56b0f6b8d2e124e7b036c64d825f404f8f14d209c14f6b27"} Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.266502 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" event={"ID":"e9d29e18-f362-478f-911d-ed979e43aae1","Type":"ContainerStarted","Data":"3792233b357b79b987ea2bc0bbf51e4aad7dca62813d47d9c7c935e8601a1025"} Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.267309 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.268425 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" event={"ID":"94f344cf-0f09-4812-ab40-dcce7f260a53","Type":"ContainerStarted","Data":"f55b046ccc9003b9acd2ec56d44041b275f3b82b6a6645715b6885bf0876fa8a"} Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.268578 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.289020 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" podStartSLOduration=1.761949921 podStartE2EDuration="8.2890002s" podCreationTimestamp="2026-02-19 09:56:49 +0000 UTC" firstStartedPulling="2026-02-19 09:56:50.094806531 +0000 UTC m=+719.384238169" lastFinishedPulling="2026-02-19 09:56:56.62185679 +0000 UTC m=+725.911288448" observedRunningTime="2026-02-19 09:56:57.28697051 +0000 UTC m=+726.576402148" watchObservedRunningTime="2026-02-19 09:56:57.2890002 +0000 UTC m=+726.578431838" Feb 19 09:56:57 crc kubenswrapper[4873]: I0219 09:56:57.311227 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" podStartSLOduration=2.482902547 podStartE2EDuration="9.311207424s" podCreationTimestamp="2026-02-19 09:56:48 +0000 UTC" firstStartedPulling="2026-02-19 09:56:49.773745788 +0000 UTC m=+719.063177426" lastFinishedPulling="2026-02-19 09:56:56.602050655 +0000 UTC m=+725.891482303" observedRunningTime="2026-02-19 09:56:57.307942164 +0000 UTC m=+726.597373832" watchObservedRunningTime="2026-02-19 09:56:57.311207424 +0000 UTC m=+726.600639062" Feb 19 09:57:09 crc kubenswrapper[4873]: I0219 09:57:09.887354 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bf7457c95-rq2ph" Feb 19 09:57:18 crc kubenswrapper[4873]: I0219 09:57:18.240796 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:57:18 crc kubenswrapper[4873]: I0219 09:57:18.241342 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:57:24 crc kubenswrapper[4873]: I0219 09:57:24.354640 4873 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 09:57:29 crc kubenswrapper[4873]: I0219 09:57:29.320080 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6897955989-f6tl8" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.052029 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-w8fjg"] Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.054737 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.057918 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.057957 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.058287 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gllz7" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.060979 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"] Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.062332 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.065466 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070480 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-conf\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070587 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070639 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-reloader\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070694 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-startup\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d8f9aee-601f-4530-876b-83709311196b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070776 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-sockets\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070849 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nj2\" (UniqueName: \"kubernetes.io/projected/76ea40c9-c4a3-4a32-82a5-d725a73db80d-kube-api-access-29nj2\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070899 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.070939 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lg6\" (UniqueName: \"kubernetes.io/projected/8d8f9aee-601f-4530-876b-83709311196b-kube-api-access-h9lg6\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.084842 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"] Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.148746 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-phsr6"] Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.149687 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-phsr6" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.152261 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-96f8q" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.152718 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.152798 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.152932 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.164062 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-7t964"] Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.165007 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7t964" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.167037 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174322 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174364 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lg6\" (UniqueName: \"kubernetes.io/projected/8d8f9aee-601f-4530-876b-83709311196b-kube-api-access-h9lg6\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174389 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-conf\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174415 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174432 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-reloader\") pod 
\"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174454 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-startup\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174488 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d8f9aee-601f-4530-876b-83709311196b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174503 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-sockets\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174537 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29nj2\" (UniqueName: \"kubernetes.io/projected/76ea40c9-c4a3-4a32-82a5-d725a73db80d-kube-api-access-29nj2\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.174793 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7t964"] Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 09:57:30.174863 4873 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 
09:57:30.174900 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs podName:76ea40c9-c4a3-4a32-82a5-d725a73db80d nodeName:}" failed. No retries permitted until 2026-02-19 09:57:30.674887072 +0000 UTC m=+759.964318710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs") pod "frr-k8s-w8fjg" (UID: "76ea40c9-c4a3-4a32-82a5-d725a73db80d") : secret "frr-k8s-certs-secret" not found Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.175745 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-conf\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.175917 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.176089 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-reloader\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.176881 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-startup\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg" Feb 19 09:57:30 crc 
kubenswrapper[4873]: I0219 09:57:30.180471 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76ea40c9-c4a3-4a32-82a5-d725a73db80d-frr-sockets\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.192801 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d8f9aee-601f-4530-876b-83709311196b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.202787 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lg6\" (UniqueName: \"kubernetes.io/projected/8d8f9aee-601f-4530-876b-83709311196b-kube-api-access-h9lg6\") pod \"frr-k8s-webhook-server-78b44bf5bb-xwr52\" (UID: \"8d8f9aee-601f-4530-876b-83709311196b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.204335 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29nj2\" (UniqueName: \"kubernetes.io/projected/76ea40c9-c4a3-4a32-82a5-d725a73db80d-kube-api-access-29nj2\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276246 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276545 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxtd\" (UniqueName: \"kubernetes.io/projected/46cac2a1-6c87-4c4e-a73f-92dbee290015-kube-api-access-gdxtd\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276655 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-metrics-certs\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276700 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46cac2a1-6c87-4c4e-a73f-92dbee290015-metallb-excludel2\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276765 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-metrics-certs\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276892 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-cert\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.276927 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhr5\" (UniqueName: \"kubernetes.io/projected/4a42b4a3-c207-40a8-80b9-0532a0ec2865-kube-api-access-5zhr5\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.378002 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.379018 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxtd\" (UniqueName: \"kubernetes.io/projected/46cac2a1-6c87-4c4e-a73f-92dbee290015-kube-api-access-gdxtd\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.379165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-metrics-certs\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.379318 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46cac2a1-6c87-4c4e-a73f-92dbee290015-metallb-excludel2\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 09:57:30.378249 4873 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 09:57:30.380406 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist podName:46cac2a1-6c87-4c4e-a73f-92dbee290015 nodeName:}" failed. No retries permitted until 2026-02-19 09:57:30.880390971 +0000 UTC m=+760.169822609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist") pod "speaker-phsr6" (UID: "46cac2a1-6c87-4c4e-a73f-92dbee290015") : secret "metallb-memberlist" not found
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.380141 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/46cac2a1-6c87-4c4e-a73f-92dbee290015-metallb-excludel2\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.380281 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-metrics-certs\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.380672 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhr5\" (UniqueName: \"kubernetes.io/projected/4a42b4a3-c207-40a8-80b9-0532a0ec2865-kube-api-access-5zhr5\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.380813 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-cert\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.383150 4873 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.385192 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-metrics-certs\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.385759 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-metrics-certs\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.386929 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.393815 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a42b4a3-c207-40a8-80b9-0532a0ec2865-cert\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.397450 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxtd\" (UniqueName: \"kubernetes.io/projected/46cac2a1-6c87-4c4e-a73f-92dbee290015-kube-api-access-gdxtd\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.408146 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhr5\" (UniqueName: \"kubernetes.io/projected/4a42b4a3-c207-40a8-80b9-0532a0ec2865-kube-api-access-5zhr5\") pod \"controller-69bbfbf88f-7t964\" (UID: \"4a42b4a3-c207-40a8-80b9-0532a0ec2865\") " pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.527279 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.730935 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.734979 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76ea40c9-c4a3-4a32-82a5-d725a73db80d-metrics-certs\") pod \"frr-k8s-w8fjg\" (UID: \"76ea40c9-c4a3-4a32-82a5-d725a73db80d\") " pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.748171 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7t964"]
Feb 19 09:57:30 crc kubenswrapper[4873]: W0219 09:57:30.866183 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d8f9aee_601f_4530_876b_83709311196b.slice/crio-50fa477cf0ef00f5089eaa25ffc1a1d7d3e686d348d7cf849b1f4e4a207a301d WatchSource:0}: Error finding container 50fa477cf0ef00f5089eaa25ffc1a1d7d3e686d348d7cf849b1f4e4a207a301d: Status 404 returned error can't find the container with id 50fa477cf0ef00f5089eaa25ffc1a1d7d3e686d348d7cf849b1f4e4a207a301d
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.869801 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"]
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.934699 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 09:57:30.934915 4873 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 19 09:57:30 crc kubenswrapper[4873]: E0219 09:57:30.935466 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist podName:46cac2a1-6c87-4c4e-a73f-92dbee290015 nodeName:}" failed. No retries permitted until 2026-02-19 09:57:31.935433301 +0000 UTC m=+761.224864989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist") pod "speaker-phsr6" (UID: "46cac2a1-6c87-4c4e-a73f-92dbee290015") : secret "metallb-memberlist" not found
Feb 19 09:57:30 crc kubenswrapper[4873]: I0219 09:57:30.977316 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.563968 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"502eb2e43f2f3a9f6d40162f61e2e3f680191c4f51f81e6de07fb81e9d863c57"}
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.565709 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" event={"ID":"8d8f9aee-601f-4530-876b-83709311196b","Type":"ContainerStarted","Data":"50fa477cf0ef00f5089eaa25ffc1a1d7d3e686d348d7cf849b1f4e4a207a301d"}
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.567674 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7t964" event={"ID":"4a42b4a3-c207-40a8-80b9-0532a0ec2865","Type":"ContainerStarted","Data":"3107787efea3747ddf6f5c4e0ed52d8064c25c5a3b90cc9169d16252e2ff64cb"}
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.567741 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7t964" event={"ID":"4a42b4a3-c207-40a8-80b9-0532a0ec2865","Type":"ContainerStarted","Data":"9d1d12b0aa07c5f612100450fdcfc53911a4e1a3cd5b946085a91fd7c88ca5ea"}
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.567761 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7t964" event={"ID":"4a42b4a3-c207-40a8-80b9-0532a0ec2865","Type":"ContainerStarted","Data":"1fe7e3152bc984605a86bca2cd2b9adacb5f65a6a2d4ef9f8b5fd3455326a661"}
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.568086 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.593024 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-7t964" podStartSLOduration=1.593003865 podStartE2EDuration="1.593003865s" podCreationTimestamp="2026-02-19 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:57:31.589993971 +0000 UTC m=+760.879425649" watchObservedRunningTime="2026-02-19 09:57:31.593003865 +0000 UTC m=+760.882435533"
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.948342 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:31 crc kubenswrapper[4873]: I0219 09:57:31.974952 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/46cac2a1-6c87-4c4e-a73f-92dbee290015-memberlist\") pod \"speaker-phsr6\" (UID: \"46cac2a1-6c87-4c4e-a73f-92dbee290015\") " pod="metallb-system/speaker-phsr6"
Feb 19 09:57:32 crc kubenswrapper[4873]: I0219 09:57:32.262039 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-phsr6"
Feb 19 09:57:32 crc kubenswrapper[4873]: I0219 09:57:32.578815 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-phsr6" event={"ID":"46cac2a1-6c87-4c4e-a73f-92dbee290015","Type":"ContainerStarted","Data":"50341f7b27bf4e167ea6496da28c18490963bf3025a411a59219ecb858afecef"}
Feb 19 09:57:33 crc kubenswrapper[4873]: I0219 09:57:33.593247 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-phsr6" event={"ID":"46cac2a1-6c87-4c4e-a73f-92dbee290015","Type":"ContainerStarted","Data":"4b7f8ef06662077054851d192bdf57df18e6a9b7678003895ef6cc22216f8e4c"}
Feb 19 09:57:33 crc kubenswrapper[4873]: I0219 09:57:33.593713 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-phsr6" event={"ID":"46cac2a1-6c87-4c4e-a73f-92dbee290015","Type":"ContainerStarted","Data":"80d45fb6c8582285aad835bf9aa4b8086c2108b74e51ea444c6769f7e66cf7cc"}
Feb 19 09:57:33 crc kubenswrapper[4873]: I0219 09:57:33.593763 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-phsr6"
Feb 19 09:57:33 crc kubenswrapper[4873]: I0219 09:57:33.619758 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-phsr6" podStartSLOduration=3.619741033 podStartE2EDuration="3.619741033s" podCreationTimestamp="2026-02-19 09:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:57:33.618039172 +0000 UTC m=+762.907470810" watchObservedRunningTime="2026-02-19 09:57:33.619741033 +0000 UTC m=+762.909172671"
Feb 19 09:57:38 crc kubenswrapper[4873]: I0219 09:57:38.640251 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" event={"ID":"8d8f9aee-601f-4530-876b-83709311196b","Type":"ContainerStarted","Data":"27db045438f34c265d79c2757d753ddcad6e64945bfa017dffe283c5406f064c"}
Feb 19 09:57:38 crc kubenswrapper[4873]: I0219 09:57:38.640918 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"
Feb 19 09:57:38 crc kubenswrapper[4873]: I0219 09:57:38.642640 4873 generic.go:334] "Generic (PLEG): container finished" podID="76ea40c9-c4a3-4a32-82a5-d725a73db80d" containerID="d91624a39ed8612251e78f0a46d737d8e6dae41106c6b038ee3149ba85b509d4" exitCode=0
Feb 19 09:57:38 crc kubenswrapper[4873]: I0219 09:57:38.642884 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerDied","Data":"d91624a39ed8612251e78f0a46d737d8e6dae41106c6b038ee3149ba85b509d4"}
Feb 19 09:57:38 crc kubenswrapper[4873]: I0219 09:57:38.677500 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52" podStartSLOduration=1.8214987580000002 podStartE2EDuration="8.677480613s" podCreationTimestamp="2026-02-19 09:57:30 +0000 UTC" firstStartedPulling="2026-02-19 09:57:30.86933551 +0000 UTC m=+760.158767168" lastFinishedPulling="2026-02-19 09:57:37.725317385 +0000 UTC m=+767.014749023" observedRunningTime="2026-02-19 09:57:38.676668783 +0000 UTC m=+767.966100431" watchObservedRunningTime="2026-02-19 09:57:38.677480613 +0000 UTC m=+767.966912251"
Feb 19 09:57:39 crc kubenswrapper[4873]: I0219 09:57:39.652615 4873 generic.go:334] "Generic (PLEG): container finished" podID="76ea40c9-c4a3-4a32-82a5-d725a73db80d" containerID="b8c21074afcf79a25db06d86dc625760db617720217f370bd95bfbd4acba9b3d" exitCode=0
Feb 19 09:57:39 crc kubenswrapper[4873]: I0219 09:57:39.652730 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerDied","Data":"b8c21074afcf79a25db06d86dc625760db617720217f370bd95bfbd4acba9b3d"}
Feb 19 09:57:40 crc kubenswrapper[4873]: I0219 09:57:40.532241 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-7t964"
Feb 19 09:57:40 crc kubenswrapper[4873]: I0219 09:57:40.661929 4873 generic.go:334] "Generic (PLEG): container finished" podID="76ea40c9-c4a3-4a32-82a5-d725a73db80d" containerID="392986b6ec7137e2168e32255030f078fcefdf7c4c29560fdb67923521859075" exitCode=0
Feb 19 09:57:40 crc kubenswrapper[4873]: I0219 09:57:40.661977 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerDied","Data":"392986b6ec7137e2168e32255030f078fcefdf7c4c29560fdb67923521859075"}
Feb 19 09:57:41 crc kubenswrapper[4873]: I0219 09:57:41.671640 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"0b85d63bbb9c52786ce94d20e2066ebd5e26704d074f3fa60b6cb61f657ea5cc"}
Feb 19 09:57:41 crc kubenswrapper[4873]: I0219 09:57:41.671954 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"73f780833dc2cd9c339c87a5e837f4c9cc30253ce6d843aa55b8ae0d4928323e"}
Feb 19 09:57:41 crc kubenswrapper[4873]: I0219 09:57:41.671963 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"afa4c1c85a3d2b87412bc6722f0f313aae7eeafb3d859b3acb2335770f4ccdc1"}
Feb 19 09:57:41 crc kubenswrapper[4873]: I0219 09:57:41.671973 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"f1a7c0ccde0b78a20f60db7550932b73ef0837ed0d528898e01440ec1e8f1d84"}
Feb 19 09:57:41 crc kubenswrapper[4873]: I0219 09:57:41.671980 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"c6843e5d224cb646753fa10b2195544eae84f655d977eaa287a3dab21b1c9cd6"}
Feb 19 09:57:42 crc kubenswrapper[4873]: I0219 09:57:42.268503 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-phsr6"
Feb 19 09:57:42 crc kubenswrapper[4873]: I0219 09:57:42.686558 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8fjg" event={"ID":"76ea40c9-c4a3-4a32-82a5-d725a73db80d","Type":"ContainerStarted","Data":"6d35937371da92a07d0cb6211844c14f05a4de8ba2b7938f33a0e3bdb0a9e287"}
Feb 19 09:57:42 crc kubenswrapper[4873]: I0219 09:57:42.686924 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:42 crc kubenswrapper[4873]: I0219 09:57:42.724935 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-w8fjg" podStartSLOduration=6.043018214 podStartE2EDuration="12.724741986s" podCreationTimestamp="2026-02-19 09:57:30 +0000 UTC" firstStartedPulling="2026-02-19 09:57:31.06019557 +0000 UTC m=+760.349627208" lastFinishedPulling="2026-02-19 09:57:37.741919342 +0000 UTC m=+767.031350980" observedRunningTime="2026-02-19 09:57:42.723095526 +0000 UTC m=+772.012527174" watchObservedRunningTime="2026-02-19 09:57:42.724741986 +0000 UTC m=+772.014173624"
Feb 19 09:57:44 crc kubenswrapper[4873]: I0219 09:57:44.942980 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:44 crc kubenswrapper[4873]: I0219 09:57:44.944433 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:44 crc kubenswrapper[4873]: I0219 09:57:44.947196 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nl9dk"
Feb 19 09:57:44 crc kubenswrapper[4873]: I0219 09:57:44.948313 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 19 09:57:44 crc kubenswrapper[4873]: I0219 09:57:44.952300 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.012369 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.033348 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swp6m\" (UniqueName: \"kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m\") pod \"openstack-operator-index-fv6j2\" (UID: \"4c48262d-0e66-4844-95f7-1e8daf0d1acb\") " pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.134538 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swp6m\" (UniqueName: \"kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m\") pod \"openstack-operator-index-fv6j2\" (UID: \"4c48262d-0e66-4844-95f7-1e8daf0d1acb\") " pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.161081 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swp6m\" (UniqueName: \"kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m\") pod \"openstack-operator-index-fv6j2\" (UID: \"4c48262d-0e66-4844-95f7-1e8daf0d1acb\") " pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.265958 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.535455 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:45 crc kubenswrapper[4873]: W0219 09:57:45.542495 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c48262d_0e66_4844_95f7_1e8daf0d1acb.slice/crio-5cec17efc1266c507d6d6ad9faa7f6a18e52943217e6723af9b572fe489c2d56 WatchSource:0}: Error finding container 5cec17efc1266c507d6d6ad9faa7f6a18e52943217e6723af9b572fe489c2d56: Status 404 returned error can't find the container with id 5cec17efc1266c507d6d6ad9faa7f6a18e52943217e6723af9b572fe489c2d56
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.714356 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv6j2" event={"ID":"4c48262d-0e66-4844-95f7-1e8daf0d1acb","Type":"ContainerStarted","Data":"5cec17efc1266c507d6d6ad9faa7f6a18e52943217e6723af9b572fe489c2d56"}
Feb 19 09:57:45 crc kubenswrapper[4873]: I0219 09:57:45.977989 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:46 crc kubenswrapper[4873]: I0219 09:57:46.024944 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.240747 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.241139 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.241204 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.241987 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.242081 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663" gracePeriod=600
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.319944 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.744609 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663" exitCode=0
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.744705 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663"}
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.744766 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded"}
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.744793 4873 scope.go:117] "RemoveContainer" containerID="ebff3f80b0b9d54ded2014067bb39816bc67366aec6359774e3b0cd08dfce552"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.749260 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv6j2" event={"ID":"4c48262d-0e66-4844-95f7-1e8daf0d1acb","Type":"ContainerStarted","Data":"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"}
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.796459 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fv6j2" podStartSLOduration=2.546975459 podStartE2EDuration="4.796390858s" podCreationTimestamp="2026-02-19 09:57:44 +0000 UTC" firstStartedPulling="2026-02-19 09:57:45.546219051 +0000 UTC m=+774.835650699" lastFinishedPulling="2026-02-19 09:57:47.79563446 +0000 UTC m=+777.085066098" observedRunningTime="2026-02-19 09:57:48.794587944 +0000 UTC m=+778.084019622" watchObservedRunningTime="2026-02-19 09:57:48.796390858 +0000 UTC m=+778.085822526"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.931747 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-p62rb"]
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.933216 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:48 crc kubenswrapper[4873]: I0219 09:57:48.953651 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p62rb"]
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.108453 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjgp8\" (UniqueName: \"kubernetes.io/projected/0144fe1c-ef13-4b4e-8cda-ddc72e2516bb-kube-api-access-hjgp8\") pod \"openstack-operator-index-p62rb\" (UID: \"0144fe1c-ef13-4b4e-8cda-ddc72e2516bb\") " pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.209614 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjgp8\" (UniqueName: \"kubernetes.io/projected/0144fe1c-ef13-4b4e-8cda-ddc72e2516bb-kube-api-access-hjgp8\") pod \"openstack-operator-index-p62rb\" (UID: \"0144fe1c-ef13-4b4e-8cda-ddc72e2516bb\") " pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.242078 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjgp8\" (UniqueName: \"kubernetes.io/projected/0144fe1c-ef13-4b4e-8cda-ddc72e2516bb-kube-api-access-hjgp8\") pod \"openstack-operator-index-p62rb\" (UID: \"0144fe1c-ef13-4b4e-8cda-ddc72e2516bb\") " pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.293712 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.556056 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p62rb"]
Feb 19 09:57:49 crc kubenswrapper[4873]: W0219 09:57:49.564061 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0144fe1c_ef13_4b4e_8cda_ddc72e2516bb.slice/crio-b4b01b5de11f95b1a6872a6bcdaeabcf004b864462980bdd22ff853a5962f07b WatchSource:0}: Error finding container b4b01b5de11f95b1a6872a6bcdaeabcf004b864462980bdd22ff853a5962f07b: Status 404 returned error can't find the container with id b4b01b5de11f95b1a6872a6bcdaeabcf004b864462980bdd22ff853a5962f07b
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.758585 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p62rb" event={"ID":"0144fe1c-ef13-4b4e-8cda-ddc72e2516bb","Type":"ContainerStarted","Data":"b4b01b5de11f95b1a6872a6bcdaeabcf004b864462980bdd22ff853a5962f07b"}
Feb 19 09:57:49 crc kubenswrapper[4873]: I0219 09:57:49.761559 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fv6j2" podUID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" containerName="registry-server" containerID="cri-o://e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e" gracePeriod=2
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.144936 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.329658 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swp6m\" (UniqueName: \"kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m\") pod \"4c48262d-0e66-4844-95f7-1e8daf0d1acb\" (UID: \"4c48262d-0e66-4844-95f7-1e8daf0d1acb\") "
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.337725 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m" (OuterVolumeSpecName: "kube-api-access-swp6m") pod "4c48262d-0e66-4844-95f7-1e8daf0d1acb" (UID: "4c48262d-0e66-4844-95f7-1e8daf0d1acb"). InnerVolumeSpecName "kube-api-access-swp6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.398624 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xwr52"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.431889 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swp6m\" (UniqueName: \"kubernetes.io/projected/4c48262d-0e66-4844-95f7-1e8daf0d1acb-kube-api-access-swp6m\") on node \"crc\" DevicePath \"\""
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.771587 4873 generic.go:334] "Generic (PLEG): container finished" podID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" containerID="e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e" exitCode=0
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.771634 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv6j2" event={"ID":"4c48262d-0e66-4844-95f7-1e8daf0d1acb","Type":"ContainerDied","Data":"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"}
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.771661 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv6j2"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.771696 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv6j2" event={"ID":"4c48262d-0e66-4844-95f7-1e8daf0d1acb","Type":"ContainerDied","Data":"5cec17efc1266c507d6d6ad9faa7f6a18e52943217e6723af9b572fe489c2d56"}
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.771748 4873 scope.go:117] "RemoveContainer" containerID="e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.775286 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p62rb" event={"ID":"0144fe1c-ef13-4b4e-8cda-ddc72e2516bb","Type":"ContainerStarted","Data":"c04b22b737ca79accb8d481d3d61932e9ac82753c96c288882c4f598bc0dd02b"}
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.801254 4873 scope.go:117] "RemoveContainer" containerID="e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"
Feb 19 09:57:50 crc kubenswrapper[4873]: E0219 09:57:50.804416 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e\": container with ID starting with e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e not found: ID does not exist" containerID="e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.804531 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e"} err="failed to get container status \"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e\": rpc error: code = NotFound desc = could not find container \"e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e\": container with ID starting with e90eef3aacf035d791c4bf60c97b62867b98f358059e5b065938960b6a79ea5e not found: ID does not exist"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.815033 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-p62rb" podStartSLOduration=2.762398067 podStartE2EDuration="2.815011417s" podCreationTimestamp="2026-02-19 09:57:48 +0000 UTC" firstStartedPulling="2026-02-19 09:57:49.568210154 +0000 UTC m=+778.857641792" lastFinishedPulling="2026-02-19 09:57:49.620823484 +0000 UTC m=+778.910255142" observedRunningTime="2026-02-19 09:57:50.796606816 +0000 UTC m=+780.086038454" watchObservedRunningTime="2026-02-19 09:57:50.815011417 +0000 UTC m=+780.104443055"
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.819619 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.823575 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fv6j2"]
Feb 19 09:57:50 crc kubenswrapper[4873]: I0219 09:57:50.981083 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-w8fjg"
Feb 19 09:57:51 crc kubenswrapper[4873]: I0219 09:57:51.495336 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" path="/var/lib/kubelet/pods/4c48262d-0e66-4844-95f7-1e8daf0d1acb/volumes"
Feb 19 09:57:59 crc kubenswrapper[4873]: I0219 09:57:59.294325 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-p62rb"
Feb 19 09:57:59 crc kubenswrapper[4873]: I0219 09:57:59.295041 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openstack-operators/openstack-operator-index-p62rb" Feb 19 09:57:59 crc kubenswrapper[4873]: I0219 09:57:59.336677 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-p62rb" Feb 19 09:57:59 crc kubenswrapper[4873]: I0219 09:57:59.880223 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-p62rb" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.279757 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6"] Feb 19 09:58:06 crc kubenswrapper[4873]: E0219 09:58:06.280351 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" containerName="registry-server" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.280367 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" containerName="registry-server" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.280519 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c48262d-0e66-4844-95f7-1e8daf0d1acb" containerName="registry-server" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.281695 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.285019 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-58jgl" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.293365 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6"] Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.468848 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.468964 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.469012 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9v8z\" (UniqueName: \"kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 
09:58:06.570854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.570965 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.571057 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9v8z\" (UniqueName: \"kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.571829 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.572239 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.607514 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9v8z\" (UniqueName: \"kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z\") pod \"8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.608182 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.861955 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6"] Feb 19 09:58:06 crc kubenswrapper[4873]: W0219 09:58:06.872829 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78582e6c_dedc_4608_a542_6837184954ab.slice/crio-8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e WatchSource:0}: Error finding container 8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e: Status 404 returned error can't find the container with id 8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e Feb 19 09:58:06 crc kubenswrapper[4873]: I0219 09:58:06.898438 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" 
event={"ID":"78582e6c-dedc-4608-a542-6837184954ab","Type":"ContainerStarted","Data":"8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e"} Feb 19 09:58:07 crc kubenswrapper[4873]: I0219 09:58:07.909872 4873 generic.go:334] "Generic (PLEG): container finished" podID="78582e6c-dedc-4608-a542-6837184954ab" containerID="041b2406b02a94c54e85ab5c62fae0d9c8bb4d656ef6b673d5b415f5bffe8768" exitCode=0 Feb 19 09:58:07 crc kubenswrapper[4873]: I0219 09:58:07.909927 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" event={"ID":"78582e6c-dedc-4608-a542-6837184954ab","Type":"ContainerDied","Data":"041b2406b02a94c54e85ab5c62fae0d9c8bb4d656ef6b673d5b415f5bffe8768"} Feb 19 09:58:08 crc kubenswrapper[4873]: I0219 09:58:08.920799 4873 generic.go:334] "Generic (PLEG): container finished" podID="78582e6c-dedc-4608-a542-6837184954ab" containerID="26612d1be40fb7f50e8c040bf74b4074bc70885f62408f30440ce32acc781d7b" exitCode=0 Feb 19 09:58:08 crc kubenswrapper[4873]: I0219 09:58:08.920878 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" event={"ID":"78582e6c-dedc-4608-a542-6837184954ab","Type":"ContainerDied","Data":"26612d1be40fb7f50e8c040bf74b4074bc70885f62408f30440ce32acc781d7b"} Feb 19 09:58:09 crc kubenswrapper[4873]: I0219 09:58:09.930587 4873 generic.go:334] "Generic (PLEG): container finished" podID="78582e6c-dedc-4608-a542-6837184954ab" containerID="a4143a0970bf1987ea6ea034b639e03cf1f08f5db13229767843b84977f5b054" exitCode=0 Feb 19 09:58:09 crc kubenswrapper[4873]: I0219 09:58:09.930693 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" event={"ID":"78582e6c-dedc-4608-a542-6837184954ab","Type":"ContainerDied","Data":"a4143a0970bf1987ea6ea034b639e03cf1f08f5db13229767843b84977f5b054"} Feb 19 
09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.387408 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.556642 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9v8z\" (UniqueName: \"kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z\") pod \"78582e6c-dedc-4608-a542-6837184954ab\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.556744 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle\") pod \"78582e6c-dedc-4608-a542-6837184954ab\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.556966 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util\") pod \"78582e6c-dedc-4608-a542-6837184954ab\" (UID: \"78582e6c-dedc-4608-a542-6837184954ab\") " Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.557706 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle" (OuterVolumeSpecName: "bundle") pod "78582e6c-dedc-4608-a542-6837184954ab" (UID: "78582e6c-dedc-4608-a542-6837184954ab"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.563262 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z" (OuterVolumeSpecName: "kube-api-access-h9v8z") pod "78582e6c-dedc-4608-a542-6837184954ab" (UID: "78582e6c-dedc-4608-a542-6837184954ab"). InnerVolumeSpecName "kube-api-access-h9v8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.576280 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util" (OuterVolumeSpecName: "util") pod "78582e6c-dedc-4608-a542-6837184954ab" (UID: "78582e6c-dedc-4608-a542-6837184954ab"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.658940 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9v8z\" (UniqueName: \"kubernetes.io/projected/78582e6c-dedc-4608-a542-6837184954ab-kube-api-access-h9v8z\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.659063 4873 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.659083 4873 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78582e6c-dedc-4608-a542-6837184954ab-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.946986 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" 
event={"ID":"78582e6c-dedc-4608-a542-6837184954ab","Type":"ContainerDied","Data":"8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e"} Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.947037 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8acf5b88ea8a6168e48f70276b84b945ce041f83c51ac8f7faf5f8080114094e" Feb 19 09:58:11 crc kubenswrapper[4873]: I0219 09:58:11.947069 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.476654 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx"] Feb 19 09:58:18 crc kubenswrapper[4873]: E0219 09:58:18.477309 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="extract" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.477321 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="extract" Feb 19 09:58:18 crc kubenswrapper[4873]: E0219 09:58:18.477334 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="pull" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.477340 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="pull" Feb 19 09:58:18 crc kubenswrapper[4873]: E0219 09:58:18.477359 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="util" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.477369 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="util" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.477473 4873 
memory_manager.go:354] "RemoveStaleState removing state" podUID="78582e6c-dedc-4608-a542-6837184954ab" containerName="extract" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.477882 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.479740 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-r5vxp" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.496629 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx"] Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.569708 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgdg5\" (UniqueName: \"kubernetes.io/projected/e18b6851-e022-488e-bd95-27d1659f2761-kube-api-access-vgdg5\") pod \"openstack-operator-controller-init-8476bb6847-rv4sx\" (UID: \"e18b6851-e022-488e-bd95-27d1659f2761\") " pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.671288 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgdg5\" (UniqueName: \"kubernetes.io/projected/e18b6851-e022-488e-bd95-27d1659f2761-kube-api-access-vgdg5\") pod \"openstack-operator-controller-init-8476bb6847-rv4sx\" (UID: \"e18b6851-e022-488e-bd95-27d1659f2761\") " pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.693935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgdg5\" (UniqueName: \"kubernetes.io/projected/e18b6851-e022-488e-bd95-27d1659f2761-kube-api-access-vgdg5\") pod 
\"openstack-operator-controller-init-8476bb6847-rv4sx\" (UID: \"e18b6851-e022-488e-bd95-27d1659f2761\") " pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:18 crc kubenswrapper[4873]: I0219 09:58:18.794617 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:19 crc kubenswrapper[4873]: I0219 09:58:19.026420 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx"] Feb 19 09:58:20 crc kubenswrapper[4873]: I0219 09:58:20.008236 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" event={"ID":"e18b6851-e022-488e-bd95-27d1659f2761","Type":"ContainerStarted","Data":"4f9ee02448624ca92a305341318ceb217afb0fb07dc22351c5e3bf240bb856c9"} Feb 19 09:58:24 crc kubenswrapper[4873]: I0219 09:58:24.040808 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" event={"ID":"e18b6851-e022-488e-bd95-27d1659f2761","Type":"ContainerStarted","Data":"94ba8805ea630d1c4aa4bf2bcd537d207f79163d5a6d8dda395f986aa25f179a"} Feb 19 09:58:24 crc kubenswrapper[4873]: I0219 09:58:24.041145 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:58:24 crc kubenswrapper[4873]: I0219 09:58:24.090433 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" podStartSLOduration=2.111480418 podStartE2EDuration="6.090399956s" podCreationTimestamp="2026-02-19 09:58:18 +0000 UTC" firstStartedPulling="2026-02-19 09:58:19.0418682 +0000 UTC m=+808.331299838" lastFinishedPulling="2026-02-19 09:58:23.020787738 +0000 UTC m=+812.310219376" 
observedRunningTime="2026-02-19 09:58:24.081769544 +0000 UTC m=+813.371201202" watchObservedRunningTime="2026-02-19 09:58:24.090399956 +0000 UTC m=+813.379831664" Feb 19 09:58:28 crc kubenswrapper[4873]: I0219 09:58:28.799053 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8476bb6847-rv4sx" Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.946834 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw"] Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.948089 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.961935 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw"] Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.971688 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf"] Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.972762 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.973289 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ssclt" Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.986814 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jnv7r" Feb 19 09:59:06 crc kubenswrapper[4873]: I0219 09:59:06.989937 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.018651 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.022695 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.033264 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dsdww" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.035489 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.037932 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.063703 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8n42n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.064958 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.078517 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.082471 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.083571 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.087484 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lz99m" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.094404 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.101381 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.102210 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.105981 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-24xwf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.119385 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.131078 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4t46s"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.131992 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.138517 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4t46s"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.142186 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.142202 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4l99v" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.143159 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpc8l\" (UniqueName: \"kubernetes.io/projected/d53d2bae-fcdd-408c-9950-440e841cc035-kube-api-access-jpc8l\") pod \"barbican-operator-controller-manager-868647ff47-hqmvw\" (UID: \"d53d2bae-fcdd-408c-9950-440e841cc035\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 
09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.143232 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsp8\" (UniqueName: \"kubernetes.io/projected/43531003-74d3-43b9-b0f5-6fca42b21975-kube-api-access-tvsp8\") pod \"glance-operator-controller-manager-77987464f4-vgxsl\" (UID: \"43531003-74d3-43b9-b0f5-6fca42b21975\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.143264 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2n2\" (UniqueName: \"kubernetes.io/projected/2e7ca3f2-f73b-4bac-93bb-68b2518d956e-kube-api-access-lm2n2\") pod \"cinder-operator-controller-manager-5d946d989d-cx7xf\" (UID: \"2e7ca3f2-f73b-4bac-93bb-68b2518d956e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.143317 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bl4\" (UniqueName: \"kubernetes.io/projected/f108f6ea-4506-48bf-b948-e367078c3dce-kube-api-access-c9bl4\") pod \"designate-operator-controller-manager-6d8bf5c495-t54x9\" (UID: \"f108f6ea-4506-48bf-b948-e367078c3dce\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.158495 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.159724 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.163142 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.164552 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.165828 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2fmd9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.167075 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-96ft2" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.174329 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.198189 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.199081 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.204959 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-szjl5" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.205262 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.213778 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.222618 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.223430 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.229338 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xb8zh" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.237233 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.238397 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.243943 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rnfvw" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244156 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnkj\" (UniqueName: \"kubernetes.io/projected/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-kube-api-access-8hnkj\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244222 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpc8l\" (UniqueName: \"kubernetes.io/projected/d53d2bae-fcdd-408c-9950-440e841cc035-kube-api-access-jpc8l\") pod \"barbican-operator-controller-manager-868647ff47-hqmvw\" (UID: \"d53d2bae-fcdd-408c-9950-440e841cc035\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244278 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsp8\" (UniqueName: \"kubernetes.io/projected/43531003-74d3-43b9-b0f5-6fca42b21975-kube-api-access-tvsp8\") pod \"glance-operator-controller-manager-77987464f4-vgxsl\" (UID: \"43531003-74d3-43b9-b0f5-6fca42b21975\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244303 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2n2\" (UniqueName: \"kubernetes.io/projected/2e7ca3f2-f73b-4bac-93bb-68b2518d956e-kube-api-access-lm2n2\") pod 
\"cinder-operator-controller-manager-5d946d989d-cx7xf\" (UID: \"2e7ca3f2-f73b-4bac-93bb-68b2518d956e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244344 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd65\" (UniqueName: \"kubernetes.io/projected/e4172fa9-b04e-4894-82d6-ec65ea92b004-kube-api-access-fvd65\") pod \"manila-operator-controller-manager-54f6768c69-t2hfl\" (UID: \"e4172fa9-b04e-4894-82d6-ec65ea92b004\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244383 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk7gk\" (UniqueName: \"kubernetes.io/projected/ecf3484a-026e-4655-bfa8-e5292e2f62c5-kube-api-access-pk7gk\") pod \"keystone-operator-controller-manager-b4d948c87-t7mwr\" (UID: \"ecf3484a-026e-4655-bfa8-e5292e2f62c5\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244424 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bl4\" (UniqueName: \"kubernetes.io/projected/f108f6ea-4506-48bf-b948-e367078c3dce-kube-api-access-c9bl4\") pod \"designate-operator-controller-manager-6d8bf5c495-t54x9\" (UID: \"f108f6ea-4506-48bf-b948-e367078c3dce\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244455 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2nt\" (UniqueName: \"kubernetes.io/projected/8d4b6c84-e5ed-4761-b7c7-95b21da856f7-kube-api-access-fg2nt\") pod \"heat-operator-controller-manager-69f49c598c-vwx5n\" (UID: 
\"8d4b6c84-e5ed-4761-b7c7-95b21da856f7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244495 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.244514 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4px\" (UniqueName: \"kubernetes.io/projected/2b1c8872-b310-4994-819c-a8e472d8e522-kube-api-access-9p4px\") pod \"horizon-operator-controller-manager-5b9b8895d5-r9b5b\" (UID: \"2b1c8872-b310-4994-819c-a8e472d8e522\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.248392 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.249395 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.251753 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xtskv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.289715 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2n2\" (UniqueName: \"kubernetes.io/projected/2e7ca3f2-f73b-4bac-93bb-68b2518d956e-kube-api-access-lm2n2\") pod \"cinder-operator-controller-manager-5d946d989d-cx7xf\" (UID: \"2e7ca3f2-f73b-4bac-93bb-68b2518d956e\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.292232 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bl4\" (UniqueName: \"kubernetes.io/projected/f108f6ea-4506-48bf-b948-e367078c3dce-kube-api-access-c9bl4\") pod \"designate-operator-controller-manager-6d8bf5c495-t54x9\" (UID: \"f108f6ea-4506-48bf-b948-e367078c3dce\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.292731 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpc8l\" (UniqueName: \"kubernetes.io/projected/d53d2bae-fcdd-408c-9950-440e841cc035-kube-api-access-jpc8l\") pod \"barbican-operator-controller-manager-868647ff47-hqmvw\" (UID: \"d53d2bae-fcdd-408c-9950-440e841cc035\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.293510 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.296619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tvsp8\" (UniqueName: \"kubernetes.io/projected/43531003-74d3-43b9-b0f5-6fca42b21975-kube-api-access-tvsp8\") pod \"glance-operator-controller-manager-77987464f4-vgxsl\" (UID: \"43531003-74d3-43b9-b0f5-6fca42b21975\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.298335 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.304165 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.308199 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.309050 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.320576 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ft994" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.321745 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346335 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvd65\" (UniqueName: \"kubernetes.io/projected/e4172fa9-b04e-4894-82d6-ec65ea92b004-kube-api-access-fvd65\") pod \"manila-operator-controller-manager-54f6768c69-t2hfl\" (UID: \"e4172fa9-b04e-4894-82d6-ec65ea92b004\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5r9q\" (UniqueName: \"kubernetes.io/projected/588098b3-662f-4f6f-914c-8cb28e055ccd-kube-api-access-j5r9q\") pod \"mariadb-operator-controller-manager-6994f66f48-8v7q6\" (UID: \"588098b3-662f-4f6f-914c-8cb28e055ccd\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346435 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk7gk\" (UniqueName: \"kubernetes.io/projected/ecf3484a-026e-4655-bfa8-e5292e2f62c5-kube-api-access-pk7gk\") pod \"keystone-operator-controller-manager-b4d948c87-t7mwr\" (UID: \"ecf3484a-026e-4655-bfa8-e5292e2f62c5\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346454 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7c9s\" (UniqueName: \"kubernetes.io/projected/aeccf47e-b953-4036-b271-be284b9ab385-kube-api-access-b7c9s\") pod \"ironic-operator-controller-manager-554564d7fc-f86jr\" (UID: \"aeccf47e-b953-4036-b271-be284b9ab385\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2nt\" (UniqueName: \"kubernetes.io/projected/8d4b6c84-e5ed-4761-b7c7-95b21da856f7-kube-api-access-fg2nt\") pod \"heat-operator-controller-manager-69f49c598c-vwx5n\" (UID: \"8d4b6c84-e5ed-4761-b7c7-95b21da856f7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346496 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d294k\" (UniqueName: \"kubernetes.io/projected/8eec8859-f388-4d81-bbce-0433a66a1ef7-kube-api-access-d294k\") pod \"nova-operator-controller-manager-567668f5cf-n6djt\" (UID: \"8eec8859-f388-4d81-bbce-0433a66a1ef7\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346515 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346533 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4px\" (UniqueName: 
\"kubernetes.io/projected/2b1c8872-b310-4994-819c-a8e472d8e522-kube-api-access-9p4px\") pod \"horizon-operator-controller-manager-5b9b8895d5-r9b5b\" (UID: \"2b1c8872-b310-4994-819c-a8e472d8e522\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346558 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnkj\" (UniqueName: \"kubernetes.io/projected/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-kube-api-access-8hnkj\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.346573 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flmp\" (UniqueName: \"kubernetes.io/projected/c471d099-fa02-4463-9eb9-9d0f6a3832e6-kube-api-access-2flmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-d6h72\" (UID: \"c471d099-fa02-4463-9eb9-9d0f6a3832e6\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.346859 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.346906 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:07.846889699 +0000 UTC m=+857.136321337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.348309 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.349031 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.351467 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zvctd" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.353974 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.354844 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.365093 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.365133 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bjf7v" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.368869 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4px\" (UniqueName: \"kubernetes.io/projected/2b1c8872-b310-4994-819c-a8e472d8e522-kube-api-access-9p4px\") pod \"horizon-operator-controller-manager-5b9b8895d5-r9b5b\" (UID: \"2b1c8872-b310-4994-819c-a8e472d8e522\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.371957 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk7gk\" (UniqueName: \"kubernetes.io/projected/ecf3484a-026e-4655-bfa8-e5292e2f62c5-kube-api-access-pk7gk\") pod \"keystone-operator-controller-manager-b4d948c87-t7mwr\" (UID: \"ecf3484a-026e-4655-bfa8-e5292e2f62c5\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.374077 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd65\" (UniqueName: \"kubernetes.io/projected/e4172fa9-b04e-4894-82d6-ec65ea92b004-kube-api-access-fvd65\") pod \"manila-operator-controller-manager-54f6768c69-t2hfl\" (UID: \"e4172fa9-b04e-4894-82d6-ec65ea92b004\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.384581 4873 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.393530 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.399666 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnkj\" (UniqueName: \"kubernetes.io/projected/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-kube-api-access-8hnkj\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.401469 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2nt\" (UniqueName: \"kubernetes.io/projected/8d4b6c84-e5ed-4761-b7c7-95b21da856f7-kube-api-access-fg2nt\") pod \"heat-operator-controller-manager-69f49c598c-vwx5n\" (UID: \"8d4b6c84-e5ed-4761-b7c7-95b21da856f7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.413321 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.413495 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.421273 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.423303 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.425319 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.425803 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lk7k4" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.441407 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447356 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57sh\" (UniqueName: \"kubernetes.io/projected/dc53742c-7e71-49fa-9378-b26036c80275-kube-api-access-n57sh\") pod \"ovn-operator-controller-manager-d44cf6b75-db4dr\" (UID: \"dc53742c-7e71-49fa-9378-b26036c80275\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447395 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flmp\" (UniqueName: \"kubernetes.io/projected/c471d099-fa02-4463-9eb9-9d0f6a3832e6-kube-api-access-2flmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-d6h72\" (UID: \"c471d099-fa02-4463-9eb9-9d0f6a3832e6\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447459 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hv8j\" (UniqueName: \"kubernetes.io/projected/080befba-c501-4f84-8644-6b9fda0d8d5f-kube-api-access-9hv8j\") pod 
\"octavia-operator-controller-manager-69f8888797-t9kgf\" (UID: \"080befba-c501-4f84-8644-6b9fda0d8d5f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447486 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5r9q\" (UniqueName: \"kubernetes.io/projected/588098b3-662f-4f6f-914c-8cb28e055ccd-kube-api-access-j5r9q\") pod \"mariadb-operator-controller-manager-6994f66f48-8v7q6\" (UID: \"588098b3-662f-4f6f-914c-8cb28e055ccd\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447505 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447526 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vl7m\" (UniqueName: \"kubernetes.io/projected/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-kube-api-access-9vl7m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447552 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7c9s\" (UniqueName: \"kubernetes.io/projected/aeccf47e-b953-4036-b271-be284b9ab385-kube-api-access-b7c9s\") pod \"ironic-operator-controller-manager-554564d7fc-f86jr\" (UID: \"aeccf47e-b953-4036-b271-be284b9ab385\") " 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.447579 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d294k\" (UniqueName: \"kubernetes.io/projected/8eec8859-f388-4d81-bbce-0433a66a1ef7-kube-api-access-d294k\") pod \"nova-operator-controller-manager-567668f5cf-n6djt\" (UID: \"8eec8859-f388-4d81-bbce-0433a66a1ef7\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.451040 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-r74rt"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.456238 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.459388 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2zcfc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.463857 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d294k\" (UniqueName: \"kubernetes.io/projected/8eec8859-f388-4d81-bbce-0433a66a1ef7-kube-api-access-d294k\") pod \"nova-operator-controller-manager-567668f5cf-n6djt\" (UID: \"8eec8859-f388-4d81-bbce-0433a66a1ef7\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.465811 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5r9q\" (UniqueName: \"kubernetes.io/projected/588098b3-662f-4f6f-914c-8cb28e055ccd-kube-api-access-j5r9q\") pod \"mariadb-operator-controller-manager-6994f66f48-8v7q6\" (UID: \"588098b3-662f-4f6f-914c-8cb28e055ccd\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.467956 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7c9s\" (UniqueName: \"kubernetes.io/projected/aeccf47e-b953-4036-b271-be284b9ab385-kube-api-access-b7c9s\") pod \"ironic-operator-controller-manager-554564d7fc-f86jr\" (UID: \"aeccf47e-b953-4036-b271-be284b9ab385\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.482015 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.483771 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flmp\" (UniqueName: \"kubernetes.io/projected/c471d099-fa02-4463-9eb9-9d0f6a3832e6-kube-api-access-2flmp\") pod \"neutron-operator-controller-manager-64ddbf8bb-d6h72\" (UID: \"c471d099-fa02-4463-9eb9-9d0f6a3832e6\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.495960 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-r74rt"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.511746 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.518916 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.542371 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.543285 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.544686 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.547510 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dh64x" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548255 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwkq\" (UniqueName: \"kubernetes.io/projected/74e9952e-50ef-4389-aa77-8f6e9cc790a8-kube-api-access-jrwkq\") pod \"placement-operator-controller-manager-8497b45c89-6hpwv\" (UID: \"74e9952e-50ef-4389-aa77-8f6e9cc790a8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548288 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548311 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9vl7m\" (UniqueName: \"kubernetes.io/projected/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-kube-api-access-9vl7m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548373 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57sh\" (UniqueName: \"kubernetes.io/projected/dc53742c-7e71-49fa-9378-b26036c80275-kube-api-access-n57sh\") pod \"ovn-operator-controller-manager-d44cf6b75-db4dr\" (UID: \"dc53742c-7e71-49fa-9378-b26036c80275\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76flf\" (UniqueName: \"kubernetes.io/projected/1f098ace-bbc4-46ee-8e72-ab65a59851eb-kube-api-access-76flf\") pod \"swift-operator-controller-manager-68f46476f-r74rt\" (UID: \"1f098ace-bbc4-46ee-8e72-ab65a59851eb\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.548444 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hv8j\" (UniqueName: \"kubernetes.io/projected/080befba-c501-4f84-8644-6b9fda0d8d5f-kube-api-access-9hv8j\") pod \"octavia-operator-controller-manager-69f8888797-t9kgf\" (UID: \"080befba-c501-4f84-8644-6b9fda0d8d5f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.548459 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 
09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.548508 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:08.048493197 +0000 UTC m=+857.337924835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.560754 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.579272 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hv8j\" (UniqueName: \"kubernetes.io/projected/080befba-c501-4f84-8644-6b9fda0d8d5f-kube-api-access-9hv8j\") pod \"octavia-operator-controller-manager-69f8888797-t9kgf\" (UID: \"080befba-c501-4f84-8644-6b9fda0d8d5f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.586987 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.587275 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.587876 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.588644 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vl7m\" (UniqueName: \"kubernetes.io/projected/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-kube-api-access-9vl7m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.592960 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57sh\" (UniqueName: \"kubernetes.io/projected/dc53742c-7e71-49fa-9378-b26036c80275-kube-api-access-n57sh\") pod \"ovn-operator-controller-manager-d44cf6b75-db4dr\" (UID: \"dc53742c-7e71-49fa-9378-b26036c80275\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.634937 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2szzj"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.636765 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.640365 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2szzj"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.640813 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.644329 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cgphr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.649299 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76flf\" (UniqueName: \"kubernetes.io/projected/1f098ace-bbc4-46ee-8e72-ab65a59851eb-kube-api-access-76flf\") pod \"swift-operator-controller-manager-68f46476f-r74rt\" (UID: \"1f098ace-bbc4-46ee-8e72-ab65a59851eb\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.649342 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwkq\" (UniqueName: \"kubernetes.io/projected/74e9952e-50ef-4389-aa77-8f6e9cc790a8-kube-api-access-jrwkq\") pod \"placement-operator-controller-manager-8497b45c89-6hpwv\" (UID: \"74e9952e-50ef-4389-aa77-8f6e9cc790a8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.649388 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4xn\" (UniqueName: \"kubernetes.io/projected/0e9da99c-56ee-4353-9378-c59a2c4e1608-kube-api-access-vf4xn\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g22tc\" (UID: \"0e9da99c-56ee-4353-9378-c59a2c4e1608\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.656261 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.656662 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.659095 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.665434 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.677394 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6vgpb" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.695888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76flf\" (UniqueName: \"kubernetes.io/projected/1f098ace-bbc4-46ee-8e72-ab65a59851eb-kube-api-access-76flf\") pod \"swift-operator-controller-manager-68f46476f-r74rt\" (UID: \"1f098ace-bbc4-46ee-8e72-ab65a59851eb\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.696553 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwkq\" (UniqueName: \"kubernetes.io/projected/74e9952e-50ef-4389-aa77-8f6e9cc790a8-kube-api-access-jrwkq\") pod \"placement-operator-controller-manager-8497b45c89-6hpwv\" (UID: \"74e9952e-50ef-4389-aa77-8f6e9cc790a8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.740917 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.751971 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf4xn\" (UniqueName: \"kubernetes.io/projected/0e9da99c-56ee-4353-9378-c59a2c4e1608-kube-api-access-vf4xn\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g22tc\" (UID: \"0e9da99c-56ee-4353-9378-c59a2c4e1608\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.752054 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxkq2\" (UniqueName: \"kubernetes.io/projected/e139553a-a68d-424d-95b5-9093ea05440b-kube-api-access-sxkq2\") pod \"test-operator-controller-manager-7866795846-2szzj\" (UID: \"e139553a-a68d-424d-95b5-9093ea05440b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.752079 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4pm\" (UniqueName: \"kubernetes.io/projected/e827e28d-ffd8-4f59-82bf-a6db1dab5413-kube-api-access-lj4pm\") pod \"watcher-operator-controller-manager-7d767c64df-hld6w\" (UID: \"e827e28d-ffd8-4f59-82bf-a6db1dab5413\") " pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.753496 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.774219 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.775125 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.777985 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.779372 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf4xn\" (UniqueName: \"kubernetes.io/projected/0e9da99c-56ee-4353-9378-c59a2c4e1608-kube-api-access-vf4xn\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g22tc\" (UID: \"0e9da99c-56ee-4353-9378-c59a2c4e1608\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.782232 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.782293 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sp6bz" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.782427 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.792657 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.793056 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.832657 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.833470 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.841794 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-m78br" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.851475 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4"] Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856546 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856587 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " 
pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856630 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856654 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxkq2\" (UniqueName: \"kubernetes.io/projected/e139553a-a68d-424d-95b5-9093ea05440b-kube-api-access-sxkq2\") pod \"test-operator-controller-manager-7866795846-2szzj\" (UID: \"e139553a-a68d-424d-95b5-9093ea05440b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856680 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4pm\" (UniqueName: \"kubernetes.io/projected/e827e28d-ffd8-4f59-82bf-a6db1dab5413-kube-api-access-lj4pm\") pod \"watcher-operator-controller-manager-7d767c64df-hld6w\" (UID: \"e827e28d-ffd8-4f59-82bf-a6db1dab5413\") " pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.856830 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv998\" (UniqueName: \"kubernetes.io/projected/26f0a6ea-18fb-411a-b193-83938a4bbe19-kube-api-access-kv998\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 
09:59:07.857399 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.857463 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:08.857445931 +0000 UTC m=+858.146877569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.888666 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4pm\" (UniqueName: \"kubernetes.io/projected/e827e28d-ffd8-4f59-82bf-a6db1dab5413-kube-api-access-lj4pm\") pod \"watcher-operator-controller-manager-7d767c64df-hld6w\" (UID: \"e827e28d-ffd8-4f59-82bf-a6db1dab5413\") " pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.891419 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxkq2\" (UniqueName: \"kubernetes.io/projected/e139553a-a68d-424d-95b5-9093ea05440b-kube-api-access-sxkq2\") pod \"test-operator-controller-manager-7866795846-2szzj\" (UID: \"e139553a-a68d-424d-95b5-9093ea05440b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.906924 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.958859 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.959652 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj849\" (UniqueName: \"kubernetes.io/projected/9574bff7-0aac-4a24-b69f-135ff968422e-kube-api-access-pj849\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcnz4\" (UID: \"9574bff7-0aac-4a24-b69f-135ff968422e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.959732 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv998\" (UniqueName: \"kubernetes.io/projected/26f0a6ea-18fb-411a-b193-83938a4bbe19-kube-api-access-kv998\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.959842 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 
09:59:07.959995 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.960044 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:08.460029239 +0000 UTC m=+857.749460877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.960326 4873 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: E0219 09:59:07.960447 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:08.460412649 +0000 UTC m=+857.749844277 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.983250 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv998\" (UniqueName: \"kubernetes.io/projected/26f0a6ea-18fb-411a-b193-83938a4bbe19-kube-api-access-kv998\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:07 crc kubenswrapper[4873]: I0219 09:59:07.989064 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.042553 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.061384 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.061507 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj849\" (UniqueName: \"kubernetes.io/projected/9574bff7-0aac-4a24-b69f-135ff968422e-kube-api-access-pj849\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcnz4\" (UID: \"9574bff7-0aac-4a24-b69f-135ff968422e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.061861 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.061901 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:09.061889789 +0000 UTC m=+858.351321427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.089874 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj849\" (UniqueName: \"kubernetes.io/projected/9574bff7-0aac-4a24-b69f-135ff968422e-kube-api-access-pj849\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lcnz4\" (UID: \"9574bff7-0aac-4a24-b69f-135ff968422e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.091926 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.204382 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.240823 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.433187 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" event={"ID":"8d4b6c84-e5ed-4761-b7c7-95b21da856f7","Type":"ContainerStarted","Data":"24d02d511b5ec009a77212c2771e31882b68069a6d07f2fa771ed35159b5004c"} Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.436812 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" event={"ID":"f108f6ea-4506-48bf-b948-e367078c3dce","Type":"ContainerStarted","Data":"414b9f2975057bcfe166e6127938213546b3b6734d32b11fb953b8635da0d788"} Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.469614 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.469673 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.469791 4873 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.469835 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:09.469822612 +0000 UTC m=+858.759254250 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.470266 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.470346 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:09.470327595 +0000 UTC m=+858.759759233 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.571971 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.648337 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.663864 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.688989 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeccf47e_b953_4036_b271_be284b9ab385.slice/crio-7cd316caba7af708ae47bcb905e8acc60aaaadb46d49e04c7c96b5d80f4a8a26 WatchSource:0}: Error finding container 7cd316caba7af708ae47bcb905e8acc60aaaadb46d49e04c7c96b5d80f4a8a26: Status 404 returned error can't find the container with id 7cd316caba7af708ae47bcb905e8acc60aaaadb46d49e04c7c96b5d80f4a8a26 Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.707541 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.715271 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf3484a_026e_4655_bfa8_e5292e2f62c5.slice/crio-a9042d0e19776e93d7021eddd1ada4c68f1260fe782317c12c84ba2df0681236 WatchSource:0}: Error finding container 
a9042d0e19776e93d7021eddd1ada4c68f1260fe782317c12c84ba2df0681236: Status 404 returned error can't find the container with id a9042d0e19776e93d7021eddd1ada4c68f1260fe782317c12c84ba2df0681236 Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.749034 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.763423 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.766694 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.770938 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd53d2bae_fcdd_408c_9950_440e841cc035.slice/crio-506a1bc0339c1f11cfcc484268f29e2ca84f37dbd40b9c4b435882b2f63456f6 WatchSource:0}: Error finding container 506a1bc0339c1f11cfcc484268f29e2ca84f37dbd40b9c4b435882b2f63456f6: Status 404 returned error can't find the container with id 506a1bc0339c1f11cfcc484268f29e2ca84f37dbd40b9c4b435882b2f63456f6 Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.775424 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e7ca3f2_f73b_4bac_93bb_68b2518d956e.slice/crio-adbe6fa4b0fde7878f9afdc95058300771b23557d9a2370d2af3469297f38cfe WatchSource:0}: Error finding container adbe6fa4b0fde7878f9afdc95058300771b23557d9a2370d2af3469297f38cfe: Status 404 returned error can't find the container with id adbe6fa4b0fde7878f9afdc95058300771b23557d9a2370d2af3469297f38cfe Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.780966 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.786306 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.796316 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc471d099_fa02_4463_9eb9_9d0f6a3832e6.slice/crio-0a643a8cb41354501e730acc4fc6090e1c1b0a0711553dd0927d52aa65fe54e1 WatchSource:0}: Error finding container 0a643a8cb41354501e730acc4fc6090e1c1b0a0711553dd0927d52aa65fe54e1: Status 404 returned error can't find the container with id 0a643a8cb41354501e730acc4fc6090e1c1b0a0711553dd0927d52aa65fe54e1 Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.879971 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.880430 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.880489 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:10.880472292 +0000 UTC m=+860.169903920 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.886472 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-r74rt"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.898259 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.915631 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6"] Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.926934 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.943215 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod588098b3_662f_4f6f_914c_8cb28e055ccd.slice/crio-f61878bf71a0ed236d0b8f082a5c8aee8e815d36fe794f05e4bd1c520b54c91f WatchSource:0}: Error finding container f61878bf71a0ed236d0b8f082a5c8aee8e815d36fe794f05e4bd1c520b54c91f: Status 404 returned error can't find the container with id f61878bf71a0ed236d0b8f082a5c8aee8e815d36fe794f05e4bd1c520b54c91f Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.943933 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e9952e_50ef_4389_aa77_8f6e9cc790a8.slice/crio-ea545351537f5f31065a29341387acce181aab73c2f392144210e08eaf2b03a1 WatchSource:0}: Error finding container 
ea545351537f5f31065a29341387acce181aab73c2f392144210e08eaf2b03a1: Status 404 returned error can't find the container with id ea545351537f5f31065a29341387acce181aab73c2f392144210e08eaf2b03a1 Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.947505 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrwkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-6hpwv_openstack-operators(74e9952e-50ef-4389-aa77-8f6e9cc790a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.947748 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j5r9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-8v7q6_openstack-operators(588098b3-662f-4f6f-914c-8cb28e055ccd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.948878 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" podUID="588098b3-662f-4f6f-914c-8cb28e055ccd" Feb 19 09:59:08 crc 
kubenswrapper[4873]: E0219 09:59:08.948914 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" podUID="74e9952e-50ef-4389-aa77-8f6e9cc790a8" Feb 19 09:59:08 crc kubenswrapper[4873]: I0219 09:59:08.955254 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr"] Feb 19 09:59:08 crc kubenswrapper[4873]: W0219 09:59:08.974320 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc53742c_7e71_49fa_9378_b26036c80275.slice/crio-07fc4356403790343018fd2b4f7c70f856ed6770b013fb903beb9e6b5b75ca9a WatchSource:0}: Error finding container 07fc4356403790343018fd2b4f7c70f856ed6770b013fb903beb9e6b5b75ca9a: Status 404 returned error can't find the container with id 07fc4356403790343018fd2b4f7c70f856ed6770b013fb903beb9e6b5b75ca9a Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.983251 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n57sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-db4dr_openstack-operators(dc53742c-7e71-49fa-9378-b26036c80275): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:08 crc kubenswrapper[4873]: E0219 09:59:08.984481 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" podUID="dc53742c-7e71-49fa-9378-b26036c80275" Feb 19 09:59:09 crc 
kubenswrapper[4873]: I0219 09:59:09.011077 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4"] Feb 19 09:59:09 crc kubenswrapper[4873]: W0219 09:59:09.012060 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9574bff7_0aac_4a24_b69f_135ff968422e.slice/crio-c9382da34f60f936ac014ac6367e76bc99a2b1b12eda1e174441bac949ce650d WatchSource:0}: Error finding container c9382da34f60f936ac014ac6367e76bc99a2b1b12eda1e174441bac949ce650d: Status 404 returned error can't find the container with id c9382da34f60f936ac014ac6367e76bc99a2b1b12eda1e174441bac949ce650d Feb 19 09:59:09 crc kubenswrapper[4873]: W0219 09:59:09.013197 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode827e28d_ffd8_4f59_82bf_a6db1dab5413.slice/crio-e954b7b1e5181f3f4ed58f9b10384e274c353f3c859b0d98cd0610a69a1b7f0b WatchSource:0}: Error finding container e954b7b1e5181f3f4ed58f9b10384e274c353f3c859b0d98cd0610a69a1b7f0b: Status 404 returned error can't find the container with id e954b7b1e5181f3f4ed58f9b10384e274c353f3c859b0d98cd0610a69a1b7f0b Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.018509 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w"] Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.021812 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.20:5001/openstack-k8s-operators/watcher-operator:539263f45944cd14f527defc4d55afa08e448c3e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lj4pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7d767c64df-hld6w_openstack-operators(e827e28d-ffd8-4f59-82bf-a6db1dab5413): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.023479 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" podUID="e827e28d-ffd8-4f59-82bf-a6db1dab5413" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.024046 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vf4xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-g22tc_openstack-operators(0e9da99c-56ee-4353-9378-c59a2c4e1608): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:09 crc kubenswrapper[4873]: W0219 09:59:09.024391 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode139553a_a68d_424d_95b5_9093ea05440b.slice/crio-05f1995f615c41aab02673ce54add434398a6ece6a6b752715b8d014b30d3fa6 WatchSource:0}: Error finding container 05f1995f615c41aab02673ce54add434398a6ece6a6b752715b8d014b30d3fa6: Status 404 returned error can't find the container with id 05f1995f615c41aab02673ce54add434398a6ece6a6b752715b8d014b30d3fa6 Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.025291 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" podUID="0e9da99c-56ee-4353-9378-c59a2c4e1608" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.026292 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxkq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-2szzj_openstack-operators(e139553a-a68d-424d-95b5-9093ea05440b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.027440 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" podUID="e139553a-a68d-424d-95b5-9093ea05440b" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.029060 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc"] Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.038806 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-2szzj"] Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.083011 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.083205 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.083254 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:11.083241599 +0000 UTC m=+860.372673237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.449006 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" event={"ID":"9574bff7-0aac-4a24-b69f-135ff968422e","Type":"ContainerStarted","Data":"c9382da34f60f936ac014ac6367e76bc99a2b1b12eda1e174441bac949ce650d"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.450375 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" event={"ID":"d53d2bae-fcdd-408c-9950-440e841cc035","Type":"ContainerStarted","Data":"506a1bc0339c1f11cfcc484268f29e2ca84f37dbd40b9c4b435882b2f63456f6"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.451499 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" event={"ID":"1f098ace-bbc4-46ee-8e72-ab65a59851eb","Type":"ContainerStarted","Data":"18ae34ca37037d850fa7d437165bdff531fac4b2fe5a4193823b7c876903af12"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.454146 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" event={"ID":"43531003-74d3-43b9-b0f5-6fca42b21975","Type":"ContainerStarted","Data":"b7fe5f125f1c1ac76322ee1c155615ffd2c3d11b13774f9f36efff700fd0317d"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.455252 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" event={"ID":"0e9da99c-56ee-4353-9378-c59a2c4e1608","Type":"ContainerStarted","Data":"26221d1fd1110d89ce519a4fa80ad3ebb157b158c2593caf2ced1ff87c5ffebc"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.456408 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" event={"ID":"080befba-c501-4f84-8644-6b9fda0d8d5f","Type":"ContainerStarted","Data":"706cd4e4db84d0745ce9525c4129ba96886cf7f55fc80c326270369c15ee3538"} Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.456576 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" podUID="0e9da99c-56ee-4353-9378-c59a2c4e1608" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.458947 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" 
event={"ID":"74e9952e-50ef-4389-aa77-8f6e9cc790a8","Type":"ContainerStarted","Data":"ea545351537f5f31065a29341387acce181aab73c2f392144210e08eaf2b03a1"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.470218 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" event={"ID":"e4172fa9-b04e-4894-82d6-ec65ea92b004","Type":"ContainerStarted","Data":"833fcffe5cb2f21bd4409b4b6090d4db0f28b1db27a59b14fe1c322a32736e5a"} Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.471894 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" podUID="74e9952e-50ef-4389-aa77-8f6e9cc790a8" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.473189 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" event={"ID":"8eec8859-f388-4d81-bbce-0433a66a1ef7","Type":"ContainerStarted","Data":"c8dfd3e11ceed31f615c53da9ab506380c7f11be17a86f9cc0e630bcc87e53b1"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.477233 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" event={"ID":"588098b3-662f-4f6f-914c-8cb28e055ccd","Type":"ContainerStarted","Data":"f61878bf71a0ed236d0b8f082a5c8aee8e815d36fe794f05e4bd1c520b54c91f"} Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.483035 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" podUID="588098b3-662f-4f6f-914c-8cb28e055ccd" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.488734 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.488847 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.488966 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.489007 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:11.488995299 +0000 UTC m=+860.778426937 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.489045 4873 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.489063 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:11.489057571 +0000 UTC m=+860.778489209 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.504575 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/openstack-k8s-operators/watcher-operator:539263f45944cd14f527defc4d55afa08e448c3e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" podUID="e827e28d-ffd8-4f59-82bf-a6db1dab5413" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.522997 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" podUID="dc53742c-7e71-49fa-9378-b26036c80275" Feb 19 09:59:09 crc kubenswrapper[4873]: E0219 09:59:09.523095 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" podUID="e139553a-a68d-424d-95b5-9093ea05440b" Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" event={"ID":"2b1c8872-b310-4994-819c-a8e472d8e522","Type":"ContainerStarted","Data":"615ce599ded1c085ac70ccf13569cd284845bbdb3e7da9a5ff392abf7268a6d3"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546360 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" event={"ID":"aeccf47e-b953-4036-b271-be284b9ab385","Type":"ContainerStarted","Data":"7cd316caba7af708ae47bcb905e8acc60aaaadb46d49e04c7c96b5d80f4a8a26"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546377 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" event={"ID":"2e7ca3f2-f73b-4bac-93bb-68b2518d956e","Type":"ContainerStarted","Data":"adbe6fa4b0fde7878f9afdc95058300771b23557d9a2370d2af3469297f38cfe"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546416 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" event={"ID":"e827e28d-ffd8-4f59-82bf-a6db1dab5413","Type":"ContainerStarted","Data":"e954b7b1e5181f3f4ed58f9b10384e274c353f3c859b0d98cd0610a69a1b7f0b"} Feb 19 09:59:09 
crc kubenswrapper[4873]: I0219 09:59:09.546431 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" event={"ID":"c471d099-fa02-4463-9eb9-9d0f6a3832e6","Type":"ContainerStarted","Data":"0a643a8cb41354501e730acc4fc6090e1c1b0a0711553dd0927d52aa65fe54e1"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546444 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" event={"ID":"ecf3484a-026e-4655-bfa8-e5292e2f62c5","Type":"ContainerStarted","Data":"a9042d0e19776e93d7021eddd1ada4c68f1260fe782317c12c84ba2df0681236"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546461 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" event={"ID":"e139553a-a68d-424d-95b5-9093ea05440b","Type":"ContainerStarted","Data":"05f1995f615c41aab02673ce54add434398a6ece6a6b752715b8d014b30d3fa6"} Feb 19 09:59:09 crc kubenswrapper[4873]: I0219 09:59:09.546475 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" event={"ID":"dc53742c-7e71-49fa-9378-b26036c80275","Type":"ContainerStarted","Data":"07fc4356403790343018fd2b4f7c70f856ed6770b013fb903beb9e6b5b75ca9a"} Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.529154 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" podUID="588098b3-662f-4f6f-914c-8cb28e055ccd" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.529365 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" podUID="dc53742c-7e71-49fa-9378-b26036c80275" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.529732 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" podUID="0e9da99c-56ee-4353-9378-c59a2c4e1608" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.529814 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/openstack-k8s-operators/watcher-operator:539263f45944cd14f527defc4d55afa08e448c3e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" podUID="e827e28d-ffd8-4f59-82bf-a6db1dab5413" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.529822 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" podUID="e139553a-a68d-424d-95b5-9093ea05440b" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.531176 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" podUID="74e9952e-50ef-4389-aa77-8f6e9cc790a8" Feb 19 09:59:10 crc kubenswrapper[4873]: I0219 09:59:10.928561 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.928720 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:10 crc kubenswrapper[4873]: E0219 09:59:10.929050 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:14.929031916 +0000 UTC m=+864.218463554 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: I0219 09:59:11.132037 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.132297 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.132506 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:15.132413998 +0000 UTC m=+864.421845636 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.526704 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" podUID="e139553a-a68d-424d-95b5-9093ea05440b" Feb 19 09:59:11 crc kubenswrapper[4873]: I0219 09:59:11.536613 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:11 crc kubenswrapper[4873]: I0219 09:59:11.536671 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.536789 4873 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.536827 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:15.536814455 +0000 UTC m=+864.826246093 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.536873 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:11 crc kubenswrapper[4873]: E0219 09:59:11.536901 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:15.536892276 +0000 UTC m=+864.826323914 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:14 crc kubenswrapper[4873]: I0219 09:59:14.991464 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:14 crc kubenswrapper[4873]: E0219 09:59:14.991646 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:14 crc kubenswrapper[4873]: E0219 09:59:14.991982 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:22.991963901 +0000 UTC m=+872.281395539 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: I0219 09:59:15.194907 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.195078 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.195393 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:23.195374644 +0000 UTC m=+872.484806282 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: I0219 09:59:15.606791 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:15 crc kubenswrapper[4873]: I0219 09:59:15.606915 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.606976 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.607074 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:23.60705554 +0000 UTC m=+872.896487178 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.607161 4873 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:15 crc kubenswrapper[4873]: E0219 09:59:15.607291 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:23.607268215 +0000 UTC m=+872.896699873 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:20 crc kubenswrapper[4873]: E0219 09:59:20.636408 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 19 09:59:20 crc kubenswrapper[4873]: E0219 09:59:20.637156 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2flmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-d6h72_openstack-operators(c471d099-fa02-4463-9eb9-9d0f6a3832e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 09:59:20 crc kubenswrapper[4873]: E0219 09:59:20.638334 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" podUID="c471d099-fa02-4463-9eb9-9d0f6a3832e6" Feb 19 09:59:21 crc kubenswrapper[4873]: E0219 09:59:21.361337 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 19 09:59:21 crc kubenswrapper[4873]: E0219 09:59:21.361692 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvd65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-t2hfl_openstack-operators(e4172fa9-b04e-4894-82d6-ec65ea92b004): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 09:59:21 crc kubenswrapper[4873]: E0219 09:59:21.362888 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" podUID="e4172fa9-b04e-4894-82d6-ec65ea92b004" Feb 19 09:59:21 crc kubenswrapper[4873]: E0219 09:59:21.594733 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" podUID="e4172fa9-b04e-4894-82d6-ec65ea92b004" Feb 19 09:59:21 crc kubenswrapper[4873]: E0219 09:59:21.597085 4873 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" podUID="c471d099-fa02-4463-9eb9-9d0f6a3832e6" Feb 19 09:59:22 crc kubenswrapper[4873]: I0219 09:59:22.486010 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.005743 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.005985 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d294k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-n6djt_openstack-operators(8eec8859-f388-4d81-bbce-0433a66a1ef7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.007403 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" podUID="8eec8859-f388-4d81-bbce-0433a66a1ef7" Feb 19 09:59:23 crc kubenswrapper[4873]: I0219 09:59:23.040543 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.040680 4873 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.040797 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert podName:3ff0155f-08fd-42f5-9b31-c3b9a7cefefe nodeName:}" failed. No retries permitted until 2026-02-19 09:59:39.040759713 +0000 UTC m=+888.330191361 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert") pod "infra-operator-controller-manager-79d975b745-4t46s" (UID: "3ff0155f-08fd-42f5-9b31-c3b9a7cefefe") : secret "infra-operator-webhook-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: I0219 09:59:23.246061 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.246246 4873 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.246599 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert podName:515c6c0c-ae00-4ae1-ab3f-e22e5a585681 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:39.246575796 +0000 UTC m=+888.536007434 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" (UID: "515c6c0c-ae00-4ae1-ab3f-e22e5a585681") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.608745 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" podUID="8eec8859-f388-4d81-bbce-0433a66a1ef7" Feb 19 09:59:23 crc kubenswrapper[4873]: I0219 09:59:23.651334 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:23 crc kubenswrapper[4873]: I0219 09:59:23.651398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.651547 4873 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.651569 4873 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.651626 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:39.651611127 +0000 UTC m=+888.941042765 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "metrics-server-cert" not found Feb 19 09:59:23 crc kubenswrapper[4873]: E0219 09:59:23.651641 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs podName:26f0a6ea-18fb-411a-b193-83938a4bbe19 nodeName:}" failed. No retries permitted until 2026-02-19 09:59:39.651634328 +0000 UTC m=+888.941065966 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs") pod "openstack-operator-controller-manager-77c7c45f98-q8khx" (UID: "26f0a6ea-18fb-411a-b193-83938a4bbe19") : secret "webhook-server-cert" not found Feb 19 09:59:28 crc kubenswrapper[4873]: E0219 09:59:28.220348 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 19 09:59:28 crc kubenswrapper[4873]: E0219 09:59:28.220760 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pj849,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lcnz4_openstack-operators(9574bff7-0aac-4a24-b69f-135ff968422e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 09:59:28 crc kubenswrapper[4873]: E0219 09:59:28.224118 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" podUID="9574bff7-0aac-4a24-b69f-135ff968422e" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.663772 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" event={"ID":"f108f6ea-4506-48bf-b948-e367078c3dce","Type":"ContainerStarted","Data":"29d931c2988d09b27a1e6f611064cdd823148838b7ad43fa11d4fc9b9b8f0f29"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.664603 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.674567 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" event={"ID":"ecf3484a-026e-4655-bfa8-e5292e2f62c5","Type":"ContainerStarted","Data":"d67de41f65a59288d5d6e22393c79582de02bf8df2658fdddf301f0a838f53ba"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.675165 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.681743 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" event={"ID":"8d4b6c84-e5ed-4761-b7c7-95b21da856f7","Type":"ContainerStarted","Data":"79e6f80de933ceb5722e0c6ebbbb62ad1cfbf85954dd07be035079693509ffc7"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.681999 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.682870 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" podStartSLOduration=7.811742368 podStartE2EDuration="22.682856256s" podCreationTimestamp="2026-02-19 09:59:06 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.110396515 +0000 UTC m=+857.399828153" lastFinishedPulling="2026-02-19 09:59:22.981510403 +0000 UTC m=+872.270942041" observedRunningTime="2026-02-19 09:59:28.681611415 +0000 UTC m=+877.971043053" watchObservedRunningTime="2026-02-19 09:59:28.682856256 +0000 UTC m=+877.972287894" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.683065 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" event={"ID":"1f098ace-bbc4-46ee-8e72-ab65a59851eb","Type":"ContainerStarted","Data":"7edecaa3a0e21c14a282d96f052579159fde142b47d7bef35a09855e261458b3"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.683644 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.690062 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" event={"ID":"2e7ca3f2-f73b-4bac-93bb-68b2518d956e","Type":"ContainerStarted","Data":"1826e72e9b065f61f4e2df81369c7ecd90280acf1c40019caaee869094c89687"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.690255 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.707196 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" event={"ID":"2b1c8872-b310-4994-819c-a8e472d8e522","Type":"ContainerStarted","Data":"98e29629da093cb432a8dfd1c23886c00b3cbf4873382f26efa14b76c8149a8f"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.707342 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.713808 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" event={"ID":"aeccf47e-b953-4036-b271-be284b9ab385","Type":"ContainerStarted","Data":"96e2006a4998e860d5bf286406a428397babc6f1c17e7781521102fcc8a8d198"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.713961 4873 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.719713 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" podStartSLOduration=3.305696685 podStartE2EDuration="21.719701494s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.731328707 +0000 UTC m=+858.020760345" lastFinishedPulling="2026-02-19 09:59:27.145333476 +0000 UTC m=+876.434765154" observedRunningTime="2026-02-19 09:59:28.718528395 +0000 UTC m=+878.007960053" watchObservedRunningTime="2026-02-19 09:59:28.719701494 +0000 UTC m=+878.009133132" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.734656 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" event={"ID":"080befba-c501-4f84-8644-6b9fda0d8d5f","Type":"ContainerStarted","Data":"a0dcc3dc7a4595ec37eb7b12925d55cd915d9823fe599d69974aadaedae2f691"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.734720 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.746396 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" event={"ID":"d53d2bae-fcdd-408c-9950-440e841cc035","Type":"ContainerStarted","Data":"c4e08070c9942f76bd9fad5c909640a0b97649c29a8681bc95f5b4226a48f44f"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.747182 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.750703 4873 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" podStartSLOduration=7.46964516 podStartE2EDuration="21.750685127s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.700337743 +0000 UTC m=+857.989769381" lastFinishedPulling="2026-02-19 09:59:22.98137771 +0000 UTC m=+872.270809348" observedRunningTime="2026-02-19 09:59:28.74594388 +0000 UTC m=+878.035375518" watchObservedRunningTime="2026-02-19 09:59:28.750685127 +0000 UTC m=+878.040116765" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.765763 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" event={"ID":"43531003-74d3-43b9-b0f5-6fca42b21975","Type":"ContainerStarted","Data":"1244d5df9af620976d080b9e36a7e9ca024ee40b4ceb2fe45b43c0af83c3d05e"} Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.765805 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:28 crc kubenswrapper[4873]: E0219 09:59:28.770295 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" podUID="9574bff7-0aac-4a24-b69f-135ff968422e" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.775575 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" podStartSLOduration=7.4708913710000004 podStartE2EDuration="21.77555846s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.675667925 +0000 UTC 
m=+857.965099563" lastFinishedPulling="2026-02-19 09:59:22.980335014 +0000 UTC m=+872.269766652" observedRunningTime="2026-02-19 09:59:28.772982887 +0000 UTC m=+878.062414525" watchObservedRunningTime="2026-02-19 09:59:28.77555846 +0000 UTC m=+878.064990098" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.803809 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" podStartSLOduration=3.984579821 podStartE2EDuration="22.803791846s" podCreationTimestamp="2026-02-19 09:59:06 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.324184843 +0000 UTC m=+857.613616481" lastFinishedPulling="2026-02-19 09:59:27.143396818 +0000 UTC m=+876.432828506" observedRunningTime="2026-02-19 09:59:28.801401717 +0000 UTC m=+878.090833355" watchObservedRunningTime="2026-02-19 09:59:28.803791846 +0000 UTC m=+878.093223474" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.827399 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" podStartSLOduration=8.628237799 podStartE2EDuration="22.827381047s" podCreationTimestamp="2026-02-19 09:59:06 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.781475063 +0000 UTC m=+858.070906701" lastFinishedPulling="2026-02-19 09:59:22.980618311 +0000 UTC m=+872.270049949" observedRunningTime="2026-02-19 09:59:28.823088102 +0000 UTC m=+878.112519740" watchObservedRunningTime="2026-02-19 09:59:28.827381047 +0000 UTC m=+878.116812675" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.847459 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" podStartSLOduration=7.797441279 podStartE2EDuration="21.847445732s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.931326416 +0000 UTC m=+858.220758054" 
lastFinishedPulling="2026-02-19 09:59:22.981330869 +0000 UTC m=+872.270762507" observedRunningTime="2026-02-19 09:59:28.846385616 +0000 UTC m=+878.135817244" watchObservedRunningTime="2026-02-19 09:59:28.847445732 +0000 UTC m=+878.136877370" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.878471 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" podStartSLOduration=3.635320858 podStartE2EDuration="21.878450136s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.901365717 +0000 UTC m=+858.190797355" lastFinishedPulling="2026-02-19 09:59:27.144494965 +0000 UTC m=+876.433926633" observedRunningTime="2026-02-19 09:59:28.873836482 +0000 UTC m=+878.163268120" watchObservedRunningTime="2026-02-19 09:59:28.878450136 +0000 UTC m=+878.167881774" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.903914 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" podStartSLOduration=4.365301274 podStartE2EDuration="22.903895843s" podCreationTimestamp="2026-02-19 09:59:06 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.604466081 +0000 UTC m=+857.893897719" lastFinishedPulling="2026-02-19 09:59:27.14306061 +0000 UTC m=+876.432492288" observedRunningTime="2026-02-19 09:59:28.900380046 +0000 UTC m=+878.189811684" watchObservedRunningTime="2026-02-19 09:59:28.903895843 +0000 UTC m=+878.193327481" Feb 19 09:59:28 crc kubenswrapper[4873]: I0219 09:59:28.926360 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" podStartSLOduration=8.722358359 podStartE2EDuration="22.926345106s" podCreationTimestamp="2026-02-19 09:59:06 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.776269755 +0000 UTC m=+858.065701383" 
lastFinishedPulling="2026-02-19 09:59:22.980256492 +0000 UTC m=+872.269688130" observedRunningTime="2026-02-19 09:59:28.92448355 +0000 UTC m=+878.213915188" watchObservedRunningTime="2026-02-19 09:59:28.926345106 +0000 UTC m=+878.215776744" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.800085 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" event={"ID":"588098b3-662f-4f6f-914c-8cb28e055ccd","Type":"ContainerStarted","Data":"556f49e02480733ef35a0122aad086aa30f4e5328f5458b59f88dda31745f1f1"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.802072 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.804444 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" event={"ID":"e827e28d-ffd8-4f59-82bf-a6db1dab5413","Type":"ContainerStarted","Data":"129c2f452ff95128f6ec63cabadae31295f2c92340fc15d3521904f4b92c9407"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.805247 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.807162 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" event={"ID":"0e9da99c-56ee-4353-9378-c59a2c4e1608","Type":"ContainerStarted","Data":"af72e2e3aced14f4a4dad927f0dd599cac8ddbaec28f45701eb821856e9a3cad"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.807440 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.809611 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" event={"ID":"74e9952e-50ef-4389-aa77-8f6e9cc790a8","Type":"ContainerStarted","Data":"957454f5f698313a258239d295f2a02b75685f24992e9fde3611236a7684ae24"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.809917 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.811591 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" event={"ID":"e139553a-a68d-424d-95b5-9093ea05440b","Type":"ContainerStarted","Data":"aadf9f41334181b03820e6ee84b0ee433a46d40e95ded820cb0c23d5f04dd38c"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.812014 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.813609 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" event={"ID":"dc53742c-7e71-49fa-9378-b26036c80275","Type":"ContainerStarted","Data":"b0fb5a0bcbe2638cc1fae6ab7ea0d86be42f597f41f2ec6cdaed6ebfe6646ab0"} Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.814051 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.849469 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" podStartSLOduration=3.111922571 podStartE2EDuration="26.849453141s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:09.023958869 +0000 UTC 
m=+858.313390497" lastFinishedPulling="2026-02-19 09:59:32.761489389 +0000 UTC m=+882.050921067" observedRunningTime="2026-02-19 09:59:33.8453717 +0000 UTC m=+883.134803348" watchObservedRunningTime="2026-02-19 09:59:33.849453141 +0000 UTC m=+883.138884779" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.850640 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" podStartSLOduration=3.03643035 podStartE2EDuration="26.85063572s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.947694309 +0000 UTC m=+858.237125947" lastFinishedPulling="2026-02-19 09:59:32.761899679 +0000 UTC m=+882.051331317" observedRunningTime="2026-02-19 09:59:33.831416516 +0000 UTC m=+883.120848154" watchObservedRunningTime="2026-02-19 09:59:33.85063572 +0000 UTC m=+883.140067358" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.862778 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" podStartSLOduration=3.048748404 podStartE2EDuration="26.862762829s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.947398222 +0000 UTC m=+858.236829860" lastFinishedPulling="2026-02-19 09:59:32.761412617 +0000 UTC m=+882.050844285" observedRunningTime="2026-02-19 09:59:33.861522778 +0000 UTC m=+883.150954416" watchObservedRunningTime="2026-02-19 09:59:33.862762829 +0000 UTC m=+883.152194467" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.882940 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" podStartSLOduration=3.006762219 podStartE2EDuration="26.882920645s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:09.021703253 +0000 UTC m=+858.311134891" 
lastFinishedPulling="2026-02-19 09:59:32.897861669 +0000 UTC m=+882.187293317" observedRunningTime="2026-02-19 09:59:33.87823689 +0000 UTC m=+883.167668528" watchObservedRunningTime="2026-02-19 09:59:33.882920645 +0000 UTC m=+883.172352283" Feb 19 09:59:33 crc kubenswrapper[4873]: I0219 09:59:33.894584 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" podStartSLOduration=3.120120502 podStartE2EDuration="26.894564242s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.982883846 +0000 UTC m=+858.272315484" lastFinishedPulling="2026-02-19 09:59:32.757327576 +0000 UTC m=+882.046759224" observedRunningTime="2026-02-19 09:59:33.892979913 +0000 UTC m=+883.182411561" watchObservedRunningTime="2026-02-19 09:59:33.894564242 +0000 UTC m=+883.183995890" Feb 19 09:59:34 crc kubenswrapper[4873]: I0219 09:59:34.510609 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" podStartSLOduration=3.779141884 podStartE2EDuration="27.510594374s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:09.026207574 +0000 UTC m=+858.315639212" lastFinishedPulling="2026-02-19 09:59:32.757660054 +0000 UTC m=+882.047091702" observedRunningTime="2026-02-19 09:59:33.928403376 +0000 UTC m=+883.217835024" watchObservedRunningTime="2026-02-19 09:59:34.510594374 +0000 UTC m=+883.800026012" Feb 19 09:59:34 crc kubenswrapper[4873]: I0219 09:59:34.825122 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" event={"ID":"e4172fa9-b04e-4894-82d6-ec65ea92b004","Type":"ContainerStarted","Data":"365db5b99786833a9aa194aeb1ee3d11c9cc32e014375f17139206bb6e74ec4d"} Feb 19 09:59:34 crc kubenswrapper[4873]: I0219 09:59:34.825492 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:34 crc kubenswrapper[4873]: I0219 09:59:34.850491 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" podStartSLOduration=2.24553966 podStartE2EDuration="27.85046534s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.770675027 +0000 UTC m=+858.060106665" lastFinishedPulling="2026-02-19 09:59:34.375600707 +0000 UTC m=+883.665032345" observedRunningTime="2026-02-19 09:59:34.840902924 +0000 UTC m=+884.130334562" watchObservedRunningTime="2026-02-19 09:59:34.85046534 +0000 UTC m=+884.139897018" Feb 19 09:59:35 crc kubenswrapper[4873]: I0219 09:59:35.834867 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" event={"ID":"8eec8859-f388-4d81-bbce-0433a66a1ef7","Type":"ContainerStarted","Data":"c3425d7eee071d6bac7344102c4233d900c4d7682b2503d76e8ff928df725026"} Feb 19 09:59:35 crc kubenswrapper[4873]: I0219 09:59:35.835231 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:35 crc kubenswrapper[4873]: I0219 09:59:35.865981 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" podStartSLOduration=2.777402185 podStartE2EDuration="28.865949324s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.782671742 +0000 UTC m=+858.072103380" lastFinishedPulling="2026-02-19 09:59:34.871218841 +0000 UTC m=+884.160650519" observedRunningTime="2026-02-19 09:59:35.858255715 +0000 UTC m=+885.147687393" watchObservedRunningTime="2026-02-19 09:59:35.865949324 +0000 UTC m=+885.155381002" Feb 19 09:59:37 crc 
kubenswrapper[4873]: I0219 09:59:37.388766 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-t54x9" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.398504 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-vgxsl" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.418047 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-vwx5n" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.436419 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-r9b5b" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.523895 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-t7mwr" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.549747 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f86jr" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.593257 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hqmvw" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.594912 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-cx7xf" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.743234 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-t9kgf" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.796010 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-r74rt" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.851883 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" event={"ID":"c471d099-fa02-4463-9eb9-9d0f6a3832e6","Type":"ContainerStarted","Data":"8dda634260001ca8bb82be92714b9213968e96e0e99e9572a71e53f08462f313"} Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.852068 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:37 crc kubenswrapper[4873]: I0219 09:59:37.865087 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" podStartSLOduration=2.721383287 podStartE2EDuration="30.865075441s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.8016436 +0000 UTC m=+858.091075238" lastFinishedPulling="2026-02-19 09:59:36.945335754 +0000 UTC m=+886.234767392" observedRunningTime="2026-02-19 09:59:37.864808344 +0000 UTC m=+887.154239982" watchObservedRunningTime="2026-02-19 09:59:37.865075441 +0000 UTC m=+887.154507079" Feb 19 09:59:38 crc kubenswrapper[4873]: I0219 09:59:38.047723 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7d767c64df-hld6w" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.095654 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 
09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.101856 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ff0155f-08fd-42f5-9b31-c3b9a7cefefe-cert\") pod \"infra-operator-controller-manager-79d975b745-4t46s\" (UID: \"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.260719 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.298482 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.304700 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/515c6c0c-ae00-4ae1-ab3f-e22e5a585681-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv\" (UID: \"515c6c0c-ae00-4ae1-ab3f-e22e5a585681\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.563163 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.715939 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.716476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.731771 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-metrics-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.734464 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/26f0a6ea-18fb-411a-b193-83938a4bbe19-webhook-certs\") pod \"openstack-operator-controller-manager-77c7c45f98-q8khx\" (UID: \"26f0a6ea-18fb-411a-b193-83938a4bbe19\") " pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.747465 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4t46s"] Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.872516 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" event={"ID":"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe","Type":"ContainerStarted","Data":"01c3a3adfdd88439bdfe6b1a13999b1d7fe0011c8f59f98ae2ef2dbe2e6e9998"} Feb 19 09:59:39 crc kubenswrapper[4873]: I0219 09:59:39.985469 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.062991 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv"] Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.231334 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx"] Feb 19 09:59:40 crc kubenswrapper[4873]: W0219 09:59:40.239703 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26f0a6ea_18fb_411a_b193_83938a4bbe19.slice/crio-45ffdab2a1e2445dc1d551acb98a9598acc4a126115c29e85f2d4dc351ba9007 WatchSource:0}: Error finding container 45ffdab2a1e2445dc1d551acb98a9598acc4a126115c29e85f2d4dc351ba9007: Status 404 returned error can't find the container with id 45ffdab2a1e2445dc1d551acb98a9598acc4a126115c29e85f2d4dc351ba9007 Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.882388 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" event={"ID":"26f0a6ea-18fb-411a-b193-83938a4bbe19","Type":"ContainerStarted","Data":"61063320a2d900665cae5b9138cfaf5c31fbb1ced795d32d4f69ab2c83cc0556"} Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 
09:59:40.882430 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" event={"ID":"26f0a6ea-18fb-411a-b193-83938a4bbe19","Type":"ContainerStarted","Data":"45ffdab2a1e2445dc1d551acb98a9598acc4a126115c29e85f2d4dc351ba9007"} Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.882970 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.887663 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" event={"ID":"515c6c0c-ae00-4ae1-ab3f-e22e5a585681","Type":"ContainerStarted","Data":"bcfe01fe8056c96d6a78880051378c0dacda08484df5c7eb300694fe6fb6c7a0"} Feb 19 09:59:40 crc kubenswrapper[4873]: I0219 09:59:40.915249 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" podStartSLOduration=33.915227348 podStartE2EDuration="33.915227348s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:59:40.911231899 +0000 UTC m=+890.200709888" watchObservedRunningTime="2026-02-19 09:59:40.915227348 +0000 UTC m=+890.204658986" Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.904263 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" event={"ID":"515c6c0c-ae00-4ae1-ab3f-e22e5a585681","Type":"ContainerStarted","Data":"39b5b23436213b86e36a6f189ff575a10fd54aa886bb7527359f39a1dcb974b2"} Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.904715 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.906901 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" event={"ID":"3ff0155f-08fd-42f5-9b31-c3b9a7cefefe","Type":"ContainerStarted","Data":"abb4d278c8f5ca0e86bbac695d92d7f163399f825eddb0cd4b9050686dc342bd"} Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.907115 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.910987 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" event={"ID":"9574bff7-0aac-4a24-b69f-135ff968422e","Type":"ContainerStarted","Data":"5c9f43e3e91a76192aa79b3001b3fd64a0e6d6118f3862d95f7a3d8822800134"} Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.938862 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" podStartSLOduration=33.960975806 podStartE2EDuration="35.938840228s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:40.083749017 +0000 UTC m=+889.373180655" lastFinishedPulling="2026-02-19 09:59:42.061613439 +0000 UTC m=+891.351045077" observedRunningTime="2026-02-19 09:59:42.9368942 +0000 UTC m=+892.226325878" watchObservedRunningTime="2026-02-19 09:59:42.938840228 +0000 UTC m=+892.228271866" Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.959593 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" podStartSLOduration=33.660798227 podStartE2EDuration="35.959570678s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" 
firstStartedPulling="2026-02-19 09:59:39.755437346 +0000 UTC m=+889.044868984" lastFinishedPulling="2026-02-19 09:59:42.054209777 +0000 UTC m=+891.343641435" observedRunningTime="2026-02-19 09:59:42.956583524 +0000 UTC m=+892.246015202" watchObservedRunningTime="2026-02-19 09:59:42.959570678 +0000 UTC m=+892.249002356" Feb 19 09:59:42 crc kubenswrapper[4873]: I0219 09:59:42.988779 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lcnz4" podStartSLOduration=2.945286513 podStartE2EDuration="35.988757387s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:09.015362777 +0000 UTC m=+858.304794415" lastFinishedPulling="2026-02-19 09:59:42.058833621 +0000 UTC m=+891.348265289" observedRunningTime="2026-02-19 09:59:42.978020672 +0000 UTC m=+892.267452310" watchObservedRunningTime="2026-02-19 09:59:42.988757387 +0000 UTC m=+892.278189045" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.524435 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-t2hfl" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.563394 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-d6h72" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.644470 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8v7q6" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.665247 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n6djt" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.757363 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-db4dr" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.797845 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-6hpwv" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.910200 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g22tc" Feb 19 09:59:47 crc kubenswrapper[4873]: I0219 09:59:47.993209 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-2szzj" Feb 19 09:59:48 crc kubenswrapper[4873]: I0219 09:59:48.240363 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:59:48 crc kubenswrapper[4873]: I0219 09:59:48.240440 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:59:49 crc kubenswrapper[4873]: I0219 09:59:49.270211 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4t46s" Feb 19 09:59:49 crc kubenswrapper[4873]: I0219 09:59:49.570795 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv" Feb 19 09:59:49 crc kubenswrapper[4873]: I0219 09:59:49.992680 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77c7c45f98-q8khx" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.187572 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm"] Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.189732 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.198559 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm"] Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.199885 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.200428 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.328521 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkp7\" (UniqueName: \"kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.328639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.328785 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.430208 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.430366 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.430464 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbkp7\" (UniqueName: \"kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.432530 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.442684 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.451541 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbkp7\" (UniqueName: \"kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7\") pod \"collect-profiles-29524920-796dm\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.519458 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:00 crc kubenswrapper[4873]: I0219 10:00:00.978640 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm"] Feb 19 10:00:00 crc kubenswrapper[4873]: W0219 10:00:00.991012 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod890a4af6_c400_4f2c_a387_edcbbc821b11.slice/crio-c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091 WatchSource:0}: Error finding container c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091: Status 404 returned error can't find the container with id c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091 Feb 19 10:00:01 crc kubenswrapper[4873]: I0219 10:00:01.062128 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" event={"ID":"890a4af6-c400-4f2c-a387-edcbbc821b11","Type":"ContainerStarted","Data":"c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091"} Feb 19 10:00:02 crc kubenswrapper[4873]: I0219 10:00:02.071211 4873 generic.go:334] "Generic (PLEG): container finished" podID="890a4af6-c400-4f2c-a387-edcbbc821b11" containerID="2ea87556ea1e2777f378238131c83ccd55a7eac5410c13097afbd46ee33f0929" exitCode=0 Feb 19 10:00:02 crc kubenswrapper[4873]: I0219 10:00:02.071773 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" event={"ID":"890a4af6-c400-4f2c-a387-edcbbc821b11","Type":"ContainerDied","Data":"2ea87556ea1e2777f378238131c83ccd55a7eac5410c13097afbd46ee33f0929"} Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.360437 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.478509 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume\") pod \"890a4af6-c400-4f2c-a387-edcbbc821b11\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.478563 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbkp7\" (UniqueName: \"kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7\") pod \"890a4af6-c400-4f2c-a387-edcbbc821b11\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.478630 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume\") pod \"890a4af6-c400-4f2c-a387-edcbbc821b11\" (UID: \"890a4af6-c400-4f2c-a387-edcbbc821b11\") " Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.479307 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume" (OuterVolumeSpecName: "config-volume") pod "890a4af6-c400-4f2c-a387-edcbbc821b11" (UID: "890a4af6-c400-4f2c-a387-edcbbc821b11"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.490468 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7" (OuterVolumeSpecName: "kube-api-access-kbkp7") pod "890a4af6-c400-4f2c-a387-edcbbc821b11" (UID: "890a4af6-c400-4f2c-a387-edcbbc821b11"). 
InnerVolumeSpecName "kube-api-access-kbkp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.495887 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "890a4af6-c400-4f2c-a387-edcbbc821b11" (UID: "890a4af6-c400-4f2c-a387-edcbbc821b11"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.579881 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/890a4af6-c400-4f2c-a387-edcbbc821b11-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.579926 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890a4af6-c400-4f2c-a387-edcbbc821b11-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:03 crc kubenswrapper[4873]: I0219 10:00:03.580003 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbkp7\" (UniqueName: \"kubernetes.io/projected/890a4af6-c400-4f2c-a387-edcbbc821b11-kube-api-access-kbkp7\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:04 crc kubenswrapper[4873]: I0219 10:00:04.088581 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" event={"ID":"890a4af6-c400-4f2c-a387-edcbbc821b11","Type":"ContainerDied","Data":"c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091"} Feb 19 10:00:04 crc kubenswrapper[4873]: I0219 10:00:04.088880 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c82b10b78a2df67bafea6ed3d2a04f1a658d9480b1b665d50562afe3fc6cc091" Feb 19 10:00:04 crc kubenswrapper[4873]: I0219 10:00:04.088832 4873 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.425596 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:08 crc kubenswrapper[4873]: E0219 10:00:08.426505 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890a4af6-c400-4f2c-a387-edcbbc821b11" containerName="collect-profiles" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.426517 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="890a4af6-c400-4f2c-a387-edcbbc821b11" containerName="collect-profiles" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.426669 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="890a4af6-c400-4f2c-a387-edcbbc821b11" containerName="collect-profiles" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.427495 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.431619 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.431975 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.432226 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8n6zj" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.432493 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.442817 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.481072 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.482181 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.484169 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.493720 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.550869 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnswl\" (UniqueName: \"kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.550925 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.550957 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.550985 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ddx\" (UniqueName: \"kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " 
pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.551017 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.652042 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnswl\" (UniqueName: \"kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.652324 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.652411 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.652488 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ddx\" (UniqueName: \"kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 
crc kubenswrapper[4873]: I0219 10:00:08.652568 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.653254 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.653344 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.653410 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.673067 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ddx\" (UniqueName: \"kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx\") pod \"dnsmasq-dns-7bb9bf987-bjckx\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.673094 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nnswl\" (UniqueName: \"kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl\") pod \"dnsmasq-dns-57b9d58665-gfr42\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.758699 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:08 crc kubenswrapper[4873]: I0219 10:00:08.798172 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:09 crc kubenswrapper[4873]: I0219 10:00:09.109822 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:09 crc kubenswrapper[4873]: I0219 10:00:09.145252 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" event={"ID":"b3a77d5e-b932-466f-a391-983ffef7c5ae","Type":"ContainerStarted","Data":"909aed776126287df1a7798864d3d0881f670c7df611ac6ef1496c9f130ee423"} Feb 19 10:00:09 crc kubenswrapper[4873]: I0219 10:00:09.405473 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:09 crc kubenswrapper[4873]: W0219 10:00:09.413380 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea1b96b_f9da_4733_a537_a536ec66edc0.slice/crio-68a3e971085bdee9176e4f2a37706001ac7c9cd7629296c14068391c6eb2512d WatchSource:0}: Error finding container 68a3e971085bdee9176e4f2a37706001ac7c9cd7629296c14068391c6eb2512d: Status 404 returned error can't find the container with id 68a3e971085bdee9176e4f2a37706001ac7c9cd7629296c14068391c6eb2512d Feb 19 10:00:10 crc kubenswrapper[4873]: I0219 10:00:10.153274 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" 
event={"ID":"bea1b96b-f9da-4733-a537-a536ec66edc0","Type":"ContainerStarted","Data":"68a3e971085bdee9176e4f2a37706001ac7c9cd7629296c14068391c6eb2512d"} Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.091065 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.118775 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.119920 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.138388 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.312306 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.312568 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.312744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcf5\" (UniqueName: \"kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " 
pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.414466 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.414540 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.414575 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcf5\" (UniqueName: \"kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.415530 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.415664 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.424629 4873 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.441511 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcf5\" (UniqueName: \"kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5\") pod \"dnsmasq-dns-7569d6d65f-54dks\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.467061 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.468178 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.505940 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.515304 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.515371 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.515417 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlb64\" (UniqueName: 
\"kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.616750 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.616801 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.616849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlb64\" (UniqueName: \"kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.617626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.617698 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config\") pod \"dnsmasq-dns-684f645dc-zkgql\" 
(UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.642888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlb64\" (UniqueName: \"kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64\") pod \"dnsmasq-dns-684f645dc-zkgql\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.736623 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.782381 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.830393 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.849510 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.850705 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.860185 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"] Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.920593 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.920651 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5m2m\" (UniqueName: \"kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:12 crc kubenswrapper[4873]: I0219 10:00:12.920752 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.021654 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.023852 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5m2m\" (UniqueName: 
\"kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.024171 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.022491 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.024950 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.040005 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5m2m\" (UniqueName: \"kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m\") pod \"dnsmasq-dns-58ff7f48c5-nqbz4\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.174926 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.295211 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.296666 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.299465 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.299612 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.299728 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.299819 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.299993 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.300217 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.300355 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-4gb2b" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332336 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-pod-info\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332376 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332413 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332450 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332488 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332586 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc 
kubenswrapper[4873]: I0219 10:00:13.332631 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332695 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332717 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332760 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.332806 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzqn\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-kube-api-access-gfzqn\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434020 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434412 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434447 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434478 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434492 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434515 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434533 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434551 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzqn\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-kube-api-access-gfzqn\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434592 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434617 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.434633 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.435489 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.437659 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.438221 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: 
\"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.438368 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.441810 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.442359 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.446532 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.447042 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.455804 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.456416 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.458499 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzqn\" (UniqueName: \"kubernetes.io/projected/da89f0ff-c51c-4c4a-8df4-f7787d29ddd2-kube-api-access-gfzqn\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.468474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.634179 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.644721 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.646239 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.653578 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.653655 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.653676 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.653698 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.653999 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.654252 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fnhrw" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.654388 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.667337 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.737725 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls\") 
pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.737780 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.737845 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjp76\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.737866 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738001 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738028 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738051 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738068 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738180 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738309 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.738370 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 
crc kubenswrapper[4873]: I0219 10:00:13.840426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840462 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840480 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840497 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840519 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840555 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840578 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840597 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840621 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840650 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjp76\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.840669 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " 
pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.841068 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.841843 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.842503 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.842701 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.843456 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.844242 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.847248 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.847904 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.847966 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.849353 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.869157 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 
crc kubenswrapper[4873]: I0219 10:00:13.874630 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjp76\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76\") pod \"rabbitmq-server-0\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.975450 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.978349 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.979526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.985570 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6k7rl" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.987653 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.987821 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.987945 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.988042 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.988168 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.988280 4873 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:00:13 crc kubenswrapper[4873]: I0219 10:00:13.996761 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.145811 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.145853 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.145895 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.145914 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.145933 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146016 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146039 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146131 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146169 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146206 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.146326 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248065 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248167 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248231 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248264 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248316 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248347 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248422 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248466 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248487 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.248513 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.249362 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.249712 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.249978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 
10:00:14.250137 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.250278 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.250646 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.253394 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.270169 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.272841 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.272935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.273521 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.290188 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n\") pod \"rabbitmq-cell1-server-0\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:14 crc kubenswrapper[4873]: I0219 10:00:14.324980 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.431997 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.436609 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.443745 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.446072 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.446178 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.446625 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.446689 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vvcp2" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.454921 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568148 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568230 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2kz\" (UniqueName: \"kubernetes.io/projected/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kube-api-access-8q2kz\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568262 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568346 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kolla-config\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-default\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568737 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.568777 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670061 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2kz\" (UniqueName: \"kubernetes.io/projected/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kube-api-access-8q2kz\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670190 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670254 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kolla-config\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670276 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670321 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-default\") pod 
\"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670343 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670365 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.670878 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.671447 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc 
kubenswrapper[4873]: I0219 10:00:15.671738 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kolla-config\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.671789 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-config-data-default\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.680547 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.681988 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.682907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.695500 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.700790 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2kz\" (UniqueName: \"kubernetes.io/projected/f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964-kube-api-access-8q2kz\") pod \"openstack-galera-0\" (UID: \"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964\") " pod="openstack/openstack-galera-0" Feb 19 10:00:15 crc kubenswrapper[4873]: I0219 10:00:15.758988 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.809501 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.810823 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.813200 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.813289 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.815533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.821809 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xfxn6" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.828747 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888247 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888310 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888343 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888365 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888425 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 
10:00:16.888473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3385c22-baa0-4261-b498-6a09c8768520-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888501 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.888610 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwswv\" (UniqueName: \"kubernetes.io/projected/e3385c22-baa0-4261-b498-6a09c8768520-kube-api-access-qwswv\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.899083 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.900057 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.902016 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.902255 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.903220 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-d42bp" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.916683 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.989929 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3385c22-baa0-4261-b498-6a09c8768520-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.989975 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-kolla-config\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990017 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990056 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qwswv\" (UniqueName: \"kubernetes.io/projected/e3385c22-baa0-4261-b498-6a09c8768520-kube-api-access-qwswv\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990091 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990207 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990242 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990274 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990302 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-config-data\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990275 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990327 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990453 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3385c22-baa0-4261-b498-6a09c8768520-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990474 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990518 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.990552 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nlz\" (UniqueName: \"kubernetes.io/projected/21bb5d7d-6565-484a-af2d-0edcff2729b3-kube-api-access-l9nlz\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.991017 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.991051 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.991945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3385c22-baa0-4261-b498-6a09c8768520-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:16 crc kubenswrapper[4873]: I0219 10:00:16.995858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " 
pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.005545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3385c22-baa0-4261-b498-6a09c8768520-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.008841 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwswv\" (UniqueName: \"kubernetes.io/projected/e3385c22-baa0-4261-b498-6a09c8768520-kube-api-access-qwswv\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.013217 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e3385c22-baa0-4261-b498-6a09c8768520\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.091796 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.091849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-config-data\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.091874 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.091907 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9nlz\" (UniqueName: \"kubernetes.io/projected/21bb5d7d-6565-484a-af2d-0edcff2729b3-kube-api-access-l9nlz\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.091944 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-kolla-config\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.092716 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-config-data\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.092759 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/21bb5d7d-6565-484a-af2d-0edcff2729b3-kolla-config\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.097527 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 
crc kubenswrapper[4873]: I0219 10:00:17.098561 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/21bb5d7d-6565-484a-af2d-0edcff2729b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.109234 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9nlz\" (UniqueName: \"kubernetes.io/projected/21bb5d7d-6565-484a-af2d-0edcff2729b3-kube-api-access-l9nlz\") pod \"memcached-0\" (UID: \"21bb5d7d-6565-484a-af2d-0edcff2729b3\") " pod="openstack/memcached-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.129228 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 10:00:17 crc kubenswrapper[4873]: I0219 10:00:17.213519 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 10:00:18 crc kubenswrapper[4873]: I0219 10:00:18.240258 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:00:18 crc kubenswrapper[4873]: I0219 10:00:18.240344 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.253984 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 
10:00:19.254915 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.258395 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-l2t29" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.267643 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.337096 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fb6h\" (UniqueName: \"kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h\") pod \"kube-state-metrics-0\" (UID: \"5224ec80-b354-467f-b660-2d22b9725be0\") " pod="openstack/kube-state-metrics-0" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.439415 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fb6h\" (UniqueName: \"kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h\") pod \"kube-state-metrics-0\" (UID: \"5224ec80-b354-467f-b660-2d22b9725be0\") " pod="openstack/kube-state-metrics-0" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.463973 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fb6h\" (UniqueName: \"kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h\") pod \"kube-state-metrics-0\" (UID: \"5224ec80-b354-467f-b660-2d22b9725be0\") " pod="openstack/kube-state-metrics-0" Feb 19 10:00:19 crc kubenswrapper[4873]: I0219 10:00:19.570440 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.488906 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.496024 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.504079 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.510809 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.511026 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.522480 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.522630 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.522887 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.522998 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.527740 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.528437 4873 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"metric-storage-prometheus-dockercfg-stpz9" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.562838 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563358 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563436 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz72n\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563477 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563697 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563781 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563824 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.563967 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.564163 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.564233 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666070 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666163 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666192 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666217 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:00:20 
crc kubenswrapper[4873]: I0219 10:00:20.666255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666287 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666309 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666381 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.666421 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz72n\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.667745 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.669203 4873 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.669251 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/668a04d4437b4137f130ddea3fc0a68c22db655664b336b39ceb124bf62a44ab/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.669465 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.669548 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.672771 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.673338 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.674684 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.679252 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.683287 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz72n\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.704758 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.708982 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.863367 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:00:20 crc kubenswrapper[4873]: I0219 10:00:20.912076 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"]
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.441950 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vsnt5"]
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.443442 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.445891 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.446786 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l2dvx"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.446964 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.457272 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vsnt5"]
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.478124 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-t5bgp"]
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.480325 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.485765 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-t5bgp"]
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494441 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6v9q\" (UniqueName: \"kubernetes.io/projected/b0ab9d21-0c11-4940-ad43-3e20c46012ad-kube-api-access-f6v9q\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494497 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0ab9d21-0c11-4940-ad43-3e20c46012ad-scripts\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494516 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-ovn-controller-tls-certs\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494538 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-combined-ca-bundle\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494556 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494573 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-log-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.494603 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.596503 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6v9q\" (UniqueName: \"kubernetes.io/projected/b0ab9d21-0c11-4940-ad43-3e20c46012ad-kube-api-access-f6v9q\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.596566 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de2f2331-fc83-420b-9e1b-fe08998cb0ab-scripts\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.596608 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0ab9d21-0c11-4940-ad43-3e20c46012ad-scripts\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.596681 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-ovn-controller-tls-certs\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.596709 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-combined-ca-bundle\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.597125 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.597161 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-log-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.597186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-run\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.597204 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kzd\" (UniqueName: \"kubernetes.io/projected/de2f2331-fc83-420b-9e1b-fe08998cb0ab-kube-api-access-w6kzd\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.597229 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.598429 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-log-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.599617 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0ab9d21-0c11-4940-ad43-3e20c46012ad-scripts\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.600027 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run-ovn\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.600064 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-etc-ovs\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.600381 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-log\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.600535 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-lib\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.600212 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0ab9d21-0c11-4940-ad43-3e20c46012ad-var-run\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.604672 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-combined-ca-bundle\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.608926 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ab9d21-0c11-4940-ad43-3e20c46012ad-ovn-controller-tls-certs\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.623406 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6v9q\" (UniqueName: \"kubernetes.io/projected/b0ab9d21-0c11-4940-ad43-3e20c46012ad-kube-api-access-f6v9q\") pod \"ovn-controller-vsnt5\" (UID: \"b0ab9d21-0c11-4940-ad43-3e20c46012ad\") " pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.701783 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-run\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.701852 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kzd\" (UniqueName: \"kubernetes.io/projected/de2f2331-fc83-420b-9e1b-fe08998cb0ab-kube-api-access-w6kzd\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.701896 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-etc-ovs\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.702155 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-run\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.702443 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-log\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.702518 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-lib\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.702564 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de2f2331-fc83-420b-9e1b-fe08998cb0ab-scripts\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.703733 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-log\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.703933 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-var-lib\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.704080 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/de2f2331-fc83-420b-9e1b-fe08998cb0ab-etc-ovs\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.711278 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de2f2331-fc83-420b-9e1b-fe08998cb0ab-scripts\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.716548 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kzd\" (UniqueName: \"kubernetes.io/projected/de2f2331-fc83-420b-9e1b-fe08998cb0ab-kube-api-access-w6kzd\") pod \"ovn-controller-ovs-t5bgp\" (UID: \"de2f2331-fc83-420b-9e1b-fe08998cb0ab\") " pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.768531 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vsnt5"
Feb 19 10:00:22 crc kubenswrapper[4873]: I0219 10:00:22.796521 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-t5bgp"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.318123 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"]
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.326439 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.388166 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"]
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.435980 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.436083 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.436257 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84czc\" (UniqueName: \"kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.537221 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84czc\" (UniqueName: \"kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.537299 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.537349 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.537945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.538123 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.567366 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84czc\" (UniqueName: \"kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc\") pod \"redhat-marketplace-xw7xl\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:24 crc kubenswrapper[4873]: I0219 10:00:24.688860 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7xl"
Feb 19 10:00:26 crc kubenswrapper[4873]: W0219 10:00:26.021496 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd736e93a_6a36_458e_a8f4_a9d511530043.slice/crio-9fb66c25e5ae993349fe01e14d2ddec63fb2fa9c9f8fdca8f66724a42a087dcf WatchSource:0}: Error finding container 9fb66c25e5ae993349fe01e14d2ddec63fb2fa9c9f8fdca8f66724a42a087dcf: Status 404 returned error can't find the container with id 9fb66c25e5ae993349fe01e14d2ddec63fb2fa9c9f8fdca8f66724a42a087dcf
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.278802 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" event={"ID":"d736e93a-6a36-458e-a8f4-a9d511530043","Type":"ContainerStarted","Data":"9fb66c25e5ae993349fe01e14d2ddec63fb2fa9c9f8fdca8f66724a42a087dcf"}
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.316713 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.319785 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.324368 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.324449 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.324556 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.324704 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.324862 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8f5jd"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.332082 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369145 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-config\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369481 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369548 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qmv\" (UniqueName: \"kubernetes.io/projected/4574f6e3-d697-424c-a9f1-7b74afb82324-kube-api-access-z7qmv\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369719 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369768 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.369951 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.370071 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.370234 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.467354 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473032 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7qmv\" (UniqueName: \"kubernetes.io/projected/4574f6e3-d697-424c-a9f1-7b74afb82324-kube-api-access-z7qmv\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473088 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473122 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473161 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473199 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473220 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473245 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-config\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473265 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.473903 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.474507 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-config\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.474684 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4574f6e3-d697-424c-a9f1-7b74afb82324-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.474945 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.486765 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.488367 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.489176 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4574f6e3-d697-424c-a9f1-7b74afb82324-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.492475 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7qmv\" (UniqueName:
\"kubernetes.io/projected/4574f6e3-d697-424c-a9f1-7b74afb82324-kube-api-access-z7qmv\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.509556 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4574f6e3-d697-424c-a9f1-7b74afb82324\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.516417 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.535929 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.537565 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.540579 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.540743 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.540915 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.541184 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nbv9w" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.566882 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.574609 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.574724 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.574941 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.575075 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877efa5f-4357-4396-8805-729237cd4e8f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.575217 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-config\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.575344 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.575447 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7l8\" (UniqueName: \"kubernetes.io/projected/877efa5f-4357-4396-8805-729237cd4e8f-kube-api-access-dd7l8\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.575553 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.647465 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677651 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677705 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677743 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677785 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877efa5f-4357-4396-8805-729237cd4e8f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677825 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-config\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677863 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677895 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7l8\" (UniqueName: \"kubernetes.io/projected/877efa5f-4357-4396-8805-729237cd4e8f-kube-api-access-dd7l8\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.677941 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.678839 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/877efa5f-4357-4396-8805-729237cd4e8f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.678955 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.680344 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.680765 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/877efa5f-4357-4396-8805-729237cd4e8f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.683284 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.683768 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.685022 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/877efa5f-4357-4396-8805-729237cd4e8f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.695071 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7l8\" (UniqueName: \"kubernetes.io/projected/877efa5f-4357-4396-8805-729237cd4e8f-kube-api-access-dd7l8\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.702386 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"877efa5f-4357-4396-8805-729237cd4e8f\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:26 crc kubenswrapper[4873]: I0219 10:00:26.878384 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.072495 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.072753 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.073459 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bb9bf987-bjckx_openstack(bea1b96b-f9da-4733-a537-a536ec66edc0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.075029 4873 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" podUID="bea1b96b-f9da-4733-a537-a536ec66edc0" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.218802 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.219050 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.219165 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnswl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57b9d58665-gfr42_openstack(b3a77d5e-b932-466f-a391-983ffef7c5ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:00:27 crc kubenswrapper[4873]: E0219 10:00:27.220445 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-57b9d58665-gfr42" podUID="b3a77d5e-b932-466f-a391-983ffef7c5ae" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.297516 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerStarted","Data":"7661fe6352a716a9db14456953448866e2c9797ab10f540b398fdf6a05d1c0b7"} Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.299003 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" event={"ID":"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7","Type":"ContainerStarted","Data":"f1e538b93b7f75469d9218fb31bc488292aeea03fa4ada6d0bc787cf733da55f"} Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.524023 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.562413 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.571881 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:27 crc kubenswrapper[4873]: W0219 10:00:27.587932 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1b4e4e4_15bf_4c4d_b7c4_bc3029c32964.slice/crio-1acf8a8ce5d5cd8234f816a3c72ac7a8779e66461a9c47b88ba6efbf92c3914d WatchSource:0}: Error finding container 1acf8a8ce5d5cd8234f816a3c72ac7a8779e66461a9c47b88ba6efbf92c3914d: Status 404 returned error can't find the container with id 1acf8a8ce5d5cd8234f816a3c72ac7a8779e66461a9c47b88ba6efbf92c3914d Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.719397 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.804359 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc\") pod \"bea1b96b-f9da-4733-a537-a536ec66edc0\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.804476 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config\") pod \"bea1b96b-f9da-4733-a537-a536ec66edc0\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.804511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ddx\" (UniqueName: \"kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx\") pod \"bea1b96b-f9da-4733-a537-a536ec66edc0\" (UID: \"bea1b96b-f9da-4733-a537-a536ec66edc0\") " Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.805151 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bea1b96b-f9da-4733-a537-a536ec66edc0" (UID: "bea1b96b-f9da-4733-a537-a536ec66edc0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.805630 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config" (OuterVolumeSpecName: "config") pod "bea1b96b-f9da-4733-a537-a536ec66edc0" (UID: "bea1b96b-f9da-4733-a537-a536ec66edc0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.808887 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx" (OuterVolumeSpecName: "kube-api-access-r4ddx") pod "bea1b96b-f9da-4733-a537-a536ec66edc0" (UID: "bea1b96b-f9da-4733-a537-a536ec66edc0"). InnerVolumeSpecName "kube-api-access-r4ddx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.910552 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.910902 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea1b96b-f9da-4733-a537-a536ec66edc0-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.910917 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ddx\" (UniqueName: \"kubernetes.io/projected/bea1b96b-f9da-4733-a537-a536ec66edc0-kube-api-access-r4ddx\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.922517 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 10:00:27 crc kubenswrapper[4873]: I0219 10:00:27.961841 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.101842 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vsnt5"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.106933 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 
10:00:28.128928 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.134608 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:00:28 crc kubenswrapper[4873]: W0219 10:00:28.145500 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ab9d21_0c11_4940_ad43_3e20c46012ad.slice/crio-251b22c5a16ace43b380a7b8daf6ec436f78cf1385f99b88a681a1a553b6aad3 WatchSource:0}: Error finding container 251b22c5a16ace43b380a7b8daf6ec436f78cf1385f99b88a681a1a553b6aad3: Status 404 returned error can't find the container with id 251b22c5a16ace43b380a7b8daf6ec436f78cf1385f99b88a681a1a553b6aad3 Feb 19 10:00:28 crc kubenswrapper[4873]: W0219 10:00:28.145907 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8568b0bc_e3d1_4e4e_8172_bada186b750a.slice/crio-a80672d41aa2943f5cef88b806f5ebd8b5afa23daa0fded90e827fd2b0faf463 WatchSource:0}: Error finding container a80672d41aa2943f5cef88b806f5ebd8b5afa23daa0fded90e827fd2b0faf463: Status 404 returned error can't find the container with id a80672d41aa2943f5cef88b806f5ebd8b5afa23daa0fded90e827fd2b0faf463 Feb 19 10:00:28 crc kubenswrapper[4873]: W0219 10:00:28.147705 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3385c22_baa0_4261_b498_6a09c8768520.slice/crio-f649ecf344cb5c5d5b12ff4d9ba9127518e13b208a1889cf50a73a268363958b WatchSource:0}: Error finding container f649ecf344cb5c5d5b12ff4d9ba9127518e13b208a1889cf50a73a268363958b: Status 404 returned error can't find the container with id f649ecf344cb5c5d5b12ff4d9ba9127518e13b208a1889cf50a73a268363958b Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.185812 4873 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.214518 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config\") pod \"b3a77d5e-b932-466f-a391-983ffef7c5ae\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.214683 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnswl\" (UniqueName: \"kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl\") pod \"b3a77d5e-b932-466f-a391-983ffef7c5ae\" (UID: \"b3a77d5e-b932-466f-a391-983ffef7c5ae\") " Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.215000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config" (OuterVolumeSpecName: "config") pod "b3a77d5e-b932-466f-a391-983ffef7c5ae" (UID: "b3a77d5e-b932-466f-a391-983ffef7c5ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.220506 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl" (OuterVolumeSpecName: "kube-api-access-nnswl") pod "b3a77d5e-b932-466f-a391-983ffef7c5ae" (UID: "b3a77d5e-b932-466f-a391-983ffef7c5ae"). InnerVolumeSpecName "kube-api-access-nnswl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.306917 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"21bb5d7d-6565-484a-af2d-0edcff2729b3","Type":"ContainerStarted","Data":"5f84bcbc933f4db8f25396e941d733635449ed485576a8055e2cb166754301d7"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.312175 4873 generic.go:334] "Generic (PLEG): container finished" podID="16ae739e-2542-4b44-820b-e08570c825dc" containerID="fd5ed21757630d8854c09406757934d793b6ff121a9eac4b399434519950fdf8" exitCode=0 Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.312263 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f645dc-zkgql" event={"ID":"16ae739e-2542-4b44-820b-e08570c825dc","Type":"ContainerDied","Data":"fd5ed21757630d8854c09406757934d793b6ff121a9eac4b399434519950fdf8"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.312281 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f645dc-zkgql" event={"ID":"16ae739e-2542-4b44-820b-e08570c825dc","Type":"ContainerStarted","Data":"4d2d1d654b551f3e3bb135ca0685d9199fac6f3f5d189b5a1555a413a88b5ddc"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.315837 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnswl\" (UniqueName: \"kubernetes.io/projected/b3a77d5e-b932-466f-a391-983ffef7c5ae-kube-api-access-nnswl\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.315859 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a77d5e-b932-466f-a391-983ffef7c5ae-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.315959 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.315950 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b9d58665-gfr42" event={"ID":"b3a77d5e-b932-466f-a391-983ffef7c5ae","Type":"ContainerDied","Data":"909aed776126287df1a7798864d3d0881f670c7df611ac6ef1496c9f130ee423"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.328003 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2","Type":"ContainerStarted","Data":"309ae73cd35faf85f8404bbe1f172f13da31fd15960c65b167f5d2c3610e2a86"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.331449 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" event={"ID":"bea1b96b-f9da-4733-a537-a536ec66edc0","Type":"ContainerDied","Data":"68a3e971085bdee9176e4f2a37706001ac7c9cd7629296c14068391c6eb2512d"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.331547 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb9bf987-bjckx" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.337272 4873 generic.go:334] "Generic (PLEG): container finished" podID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerID="505d554bc6a88454d3df3439ef0d84b488679b7ec3847d5f4302a334f4220e6d" exitCode=0 Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.337343 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" event={"ID":"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7","Type":"ContainerDied","Data":"505d554bc6a88454d3df3439ef0d84b488679b7ec3847d5f4302a334f4220e6d"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.347873 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.350234 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerStarted","Data":"a80672d41aa2943f5cef88b806f5ebd8b5afa23daa0fded90e827fd2b0faf463"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.382332 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerStarted","Data":"ef991a861997941a147c9b5a0da440f69f41ed8b1c1a849520b30accb3784df6"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.409307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vsnt5" event={"ID":"b0ab9d21-0c11-4940-ad43-3e20c46012ad","Type":"ContainerStarted","Data":"251b22c5a16ace43b380a7b8daf6ec436f78cf1385f99b88a681a1a553b6aad3"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.419950 4873 generic.go:334] "Generic (PLEG): container finished" podID="d736e93a-6a36-458e-a8f4-a9d511530043" containerID="11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3" exitCode=0 Feb 
19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.420012 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" event={"ID":"d736e93a-6a36-458e-a8f4-a9d511530043","Type":"ContainerDied","Data":"11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.422505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5224ec80-b354-467f-b660-2d22b9725be0","Type":"ContainerStarted","Data":"e0fab87f6d902a58d41b4b35cef6645c9197dee8f59fc04defe1aac4065e472b"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.423920 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3385c22-baa0-4261-b498-6a09c8768520","Type":"ContainerStarted","Data":"f649ecf344cb5c5d5b12ff4d9ba9127518e13b208a1889cf50a73a268363958b"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.424678 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964","Type":"ContainerStarted","Data":"1acf8a8ce5d5cd8234f816a3c72ac7a8779e66461a9c47b88ba6efbf92c3914d"} Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.501035 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.514416 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b9d58665-gfr42"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.530363 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.537034 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bb9bf987-bjckx"] Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.549989 4873 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:00:28 crc kubenswrapper[4873]: E0219 10:00:28.639598 4873 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 10:00:28 crc kubenswrapper[4873]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:00:28 crc kubenswrapper[4873]: > podSandboxID="f1e538b93b7f75469d9218fb31bc488292aeea03fa4ada6d0bc787cf733da55f" Feb 19 10:00:28 crc kubenswrapper[4873]: E0219 10:00:28.639772 4873 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 10:00:28 crc kubenswrapper[4873]: container &Container{Name:dnsmasq-dns,Image:38.102.83.20:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h65fh56h6fh87h85h57h76h5b7h94hffh649hfbh8ch5bch56fh5c5hbh86hf9h99h5dch95h66hd5h555h566h646h546h79h9dh55dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pcf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7569d6d65f-54dks_openstack(040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:00:28 crc kubenswrapper[4873]: > logger="UnhandledError" Feb 19 10:00:28 crc kubenswrapper[4873]: E0219 10:00:28.640957 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.757614 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.833017 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config\") pod \"16ae739e-2542-4b44-820b-e08570c825dc\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.833254 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc\") pod \"16ae739e-2542-4b44-820b-e08570c825dc\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.833308 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlb64\" (UniqueName: \"kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64\") pod \"16ae739e-2542-4b44-820b-e08570c825dc\" (UID: \"16ae739e-2542-4b44-820b-e08570c825dc\") " Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.839395 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64" (OuterVolumeSpecName: "kube-api-access-jlb64") pod "16ae739e-2542-4b44-820b-e08570c825dc" (UID: "16ae739e-2542-4b44-820b-e08570c825dc"). InnerVolumeSpecName "kube-api-access-jlb64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.855978 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16ae739e-2542-4b44-820b-e08570c825dc" (UID: "16ae739e-2542-4b44-820b-e08570c825dc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.857563 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config" (OuterVolumeSpecName: "config") pod "16ae739e-2542-4b44-820b-e08570c825dc" (UID: "16ae739e-2542-4b44-820b-e08570c825dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.935204 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.935244 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16ae739e-2542-4b44-820b-e08570c825dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:28 crc kubenswrapper[4873]: I0219 10:00:28.935257 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlb64\" (UniqueName: \"kubernetes.io/projected/16ae739e-2542-4b44-820b-e08570c825dc-kube-api-access-jlb64\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.434202 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.439085 4873 generic.go:334] "Generic (PLEG): container finished" podID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerID="dcfd39b1c5289cd4a8556f216e399630fd4927789a8b353b5566c5719ea3fcee" exitCode=0 Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.439131 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerDied","Data":"dcfd39b1c5289cd4a8556f216e399630fd4927789a8b353b5566c5719ea3fcee"} Feb 19 
10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.440852 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-684f645dc-zkgql" event={"ID":"16ae739e-2542-4b44-820b-e08570c825dc","Type":"ContainerDied","Data":"4d2d1d654b551f3e3bb135ca0685d9199fac6f3f5d189b5a1555a413a88b5ddc"} Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.440879 4873 scope.go:117] "RemoveContainer" containerID="fd5ed21757630d8854c09406757934d793b6ff121a9eac4b399434519950fdf8" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.440906 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-684f645dc-zkgql" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.445559 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" event={"ID":"d736e93a-6a36-458e-a8f4-a9d511530043","Type":"ContainerStarted","Data":"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750"} Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.445683 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.446935 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"877efa5f-4357-4396-8805-729237cd4e8f","Type":"ContainerStarted","Data":"5b8dfcc8863027970b2baffa1cde2eca756b71ec863a27bcadcbef578596d8c4"} Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.448888 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerStarted","Data":"3d0b7f98084ff77ff34a64d3b9fb32fc7993ea571d51b6cb0b24962f0fd5c9ef"} Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.477451 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" 
podStartSLOduration=15.759529305000001 podStartE2EDuration="17.477434931s" podCreationTimestamp="2026-02-19 10:00:12 +0000 UTC" firstStartedPulling="2026-02-19 10:00:26.032090755 +0000 UTC m=+935.321522403" lastFinishedPulling="2026-02-19 10:00:27.749996401 +0000 UTC m=+937.039428029" observedRunningTime="2026-02-19 10:00:29.473968307 +0000 UTC m=+938.763399955" watchObservedRunningTime="2026-02-19 10:00:29.477434931 +0000 UTC m=+938.766866569" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.495406 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a77d5e-b932-466f-a391-983ffef7c5ae" path="/var/lib/kubelet/pods/b3a77d5e-b932-466f-a391-983ffef7c5ae/volumes" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.495861 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea1b96b-f9da-4733-a537-a536ec66edc0" path="/var/lib/kubelet/pods/bea1b96b-f9da-4733-a537-a536ec66edc0/volumes" Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.527150 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.534485 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-684f645dc-zkgql"] Feb 19 10:00:29 crc kubenswrapper[4873]: I0219 10:00:29.583372 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-t5bgp"] Feb 19 10:00:30 crc kubenswrapper[4873]: I0219 10:00:30.454868 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4574f6e3-d697-424c-a9f1-7b74afb82324","Type":"ContainerStarted","Data":"1e894480148b571b97ed6e0b1b55886de852c96f7c290e0516e153d29b3da37a"} Feb 19 10:00:30 crc kubenswrapper[4873]: I0219 10:00:30.455895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t5bgp" 
event={"ID":"de2f2331-fc83-420b-9e1b-fe08998cb0ab","Type":"ContainerStarted","Data":"25bf411f87949538bc230bd0926ab6bb33b5472b677c8f8ea482f04a7149e1b4"} Feb 19 10:00:31 crc kubenswrapper[4873]: I0219 10:00:31.498889 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ae739e-2542-4b44-820b-e08570c825dc" path="/var/lib/kubelet/pods/16ae739e-2542-4b44-820b-e08570c825dc/volumes" Feb 19 10:00:33 crc kubenswrapper[4873]: I0219 10:00:33.177349 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:00:33 crc kubenswrapper[4873]: I0219 10:00:33.227759 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:36 crc kubenswrapper[4873]: I0219 10:00:36.516807 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" event={"ID":"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7","Type":"ContainerStarted","Data":"097bfd5b65d03aa6a20da3832add63d24bee8dabfbbd70df5da33cb07c8feac3"} Feb 19 10:00:36 crc kubenswrapper[4873]: I0219 10:00:36.517532 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:36 crc kubenswrapper[4873]: I0219 10:00:36.517294 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="dnsmasq-dns" containerID="cri-o://097bfd5b65d03aa6a20da3832add63d24bee8dabfbbd70df5da33cb07c8feac3" gracePeriod=10 Feb 19 10:00:36 crc kubenswrapper[4873]: I0219 10:00:36.539088 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" podStartSLOduration=23.861208922 podStartE2EDuration="24.539041486s" podCreationTimestamp="2026-02-19 10:00:12 +0000 UTC" firstStartedPulling="2026-02-19 10:00:27.073299335 +0000 UTC m=+936.362730973" 
lastFinishedPulling="2026-02-19 10:00:27.751131899 +0000 UTC m=+937.040563537" observedRunningTime="2026-02-19 10:00:36.535673053 +0000 UTC m=+945.825104761" watchObservedRunningTime="2026-02-19 10:00:36.539041486 +0000 UTC m=+945.828473134" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.527975 4873 generic.go:334] "Generic (PLEG): container finished" podID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerID="8997664180cc4a89bc96536538fcab96c359ac683add2f689bc03e6ec23ed7ec" exitCode=0 Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.528243 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerDied","Data":"8997664180cc4a89bc96536538fcab96c359ac683add2f689bc03e6ec23ed7ec"} Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.530579 4873 generic.go:334] "Generic (PLEG): container finished" podID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerID="097bfd5b65d03aa6a20da3832add63d24bee8dabfbbd70df5da33cb07c8feac3" exitCode=0 Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.530615 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" event={"ID":"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7","Type":"ContainerDied","Data":"097bfd5b65d03aa6a20da3832add63d24bee8dabfbbd70df5da33cb07c8feac3"} Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.555338 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] Feb 19 10:00:37 crc kubenswrapper[4873]: E0219 10:00:37.555803 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ae739e-2542-4b44-820b-e08570c825dc" containerName="init" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.555830 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ae739e-2542-4b44-820b-e08570c825dc" containerName="init" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.556033 4873 
memory_manager.go:354] "RemoveStaleState removing state" podUID="16ae739e-2542-4b44-820b-e08570c825dc" containerName="init" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.557806 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.582929 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.611423 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4z6\" (UniqueName: \"kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.611683 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.611872 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.713079 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4z6\" (UniqueName: \"kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6\") pod 
\"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.713196 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.713274 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.713947 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.714618 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.756412 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4z6\" (UniqueName: \"kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6\") pod \"redhat-operators-lmkgp\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " 
pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:37 crc kubenswrapper[4873]: I0219 10:00:37.902160 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:41.946896 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:41.951528 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:41.970919 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.090796 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2hw\" (UniqueName: \"kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.090907 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.090949 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " 
pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.192710 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v2hw\" (UniqueName: \"kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.192825 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.192870 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.193493 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.193531 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " 
pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.218514 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v2hw\" (UniqueName: \"kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw\") pod \"certified-operators-sjwbx\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:42.279536 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:45.599900 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t5bgp" event={"ID":"de2f2331-fc83-420b-9e1b-fe08998cb0ab","Type":"ContainerStarted","Data":"d33b00dc248cbb8c3025337ed0ac581a594ca6f6603cb02e194746a85f320725"} Feb 19 10:00:45 crc kubenswrapper[4873]: E0219 10:00:45.810077 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 10:00:45 crc kubenswrapper[4873]: E0219 10:00:45.810135 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 10:00:45 crc kubenswrapper[4873]: E0219 10:00:45.810254 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fb6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(5224ec80-b354-467f-b660-2d22b9725be0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Feb 19 10:00:45 crc kubenswrapper[4873]: E0219 10:00:45.811604 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="5224ec80-b354-467f-b660-2d22b9725be0" Feb 19 10:00:45 crc kubenswrapper[4873]: I0219 10:00:45.869568 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.056771 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc\") pod \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.057040 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config\") pod \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.057446 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pcf5\" (UniqueName: \"kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5\") pod \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\" (UID: \"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7\") " Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.066084 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5" (OuterVolumeSpecName: "kube-api-access-6pcf5") pod "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" 
(UID: "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7"). InnerVolumeSpecName "kube-api-access-6pcf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.159711 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pcf5\" (UniqueName: \"kubernetes.io/projected/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-kube-api-access-6pcf5\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.196266 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config" (OuterVolumeSpecName: "config") pod "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" (UID: "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.210787 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" (UID: "040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.261280 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.261308 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.450719 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] Feb 19 10:00:46 crc kubenswrapper[4873]: W0219 10:00:46.489317 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd115a791_c703_4c6e_91e5_8f3ab9608277.slice/crio-164021a9f61ae1ea4080d3f61899481761a6227bd5066ab4023318b78119f680 WatchSource:0}: Error finding container 164021a9f61ae1ea4080d3f61899481761a6227bd5066ab4023318b78119f680: Status 404 returned error can't find the container with id 164021a9f61ae1ea4080d3f61899481761a6227bd5066ab4023318b78119f680 Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.562203 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.610737 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" event={"ID":"040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7","Type":"ContainerDied","Data":"f1e538b93b7f75469d9218fb31bc488292aeea03fa4ada6d0bc787cf733da55f"} Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.611886 4873 scope.go:117] "RemoveContainer" containerID="097bfd5b65d03aa6a20da3832add63d24bee8dabfbbd70df5da33cb07c8feac3" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.612014 
4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerStarted","Data":"164021a9f61ae1ea4080d3f61899481761a6227bd5066ab4023318b78119f680"} Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.610808 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.613164 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerStarted","Data":"6ceab5ac0f605f81c09fcddc9dd0cf16ee123c3875c132c11dd895633c1f969a"} Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.615422 4873 generic.go:334] "Generic (PLEG): container finished" podID="de2f2331-fc83-420b-9e1b-fe08998cb0ab" containerID="d33b00dc248cbb8c3025337ed0ac581a594ca6f6603cb02e194746a85f320725" exitCode=0 Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.616552 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t5bgp" event={"ID":"de2f2331-fc83-420b-9e1b-fe08998cb0ab","Type":"ContainerDied","Data":"d33b00dc248cbb8c3025337ed0ac581a594ca6f6603cb02e194746a85f320725"} Feb 19 10:00:46 crc kubenswrapper[4873]: E0219 10:00:46.619385 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5224ec80-b354-467f-b660-2d22b9725be0" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.645289 4873 scope.go:117] "RemoveContainer" containerID="505d554bc6a88454d3df3439ef0d84b488679b7ec3847d5f4302a334f4220e6d" Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.684260 4873 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:46 crc kubenswrapper[4873]: I0219 10:00:46.692188 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7569d6d65f-54dks"] Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.499250 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" path="/var/lib/kubelet/pods/040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7/volumes" Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.628710 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4574f6e3-d697-424c-a9f1-7b74afb82324","Type":"ContainerStarted","Data":"20836d279f06285002cfbb0c8639e85b75eb8d99b8e51e3132bdeffd61ebf0bd"} Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.632513 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"21bb5d7d-6565-484a-af2d-0edcff2729b3","Type":"ContainerStarted","Data":"14c21ce2634b1dc213dc0c0ab1a58e021e2dc9c23001d06f62c02e78efddce27"} Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.633333 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.639111 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"877efa5f-4357-4396-8805-729237cd4e8f","Type":"ContainerStarted","Data":"71594272d5055445ca8570411417082fd82727b5b63787509b7b2e366874f623"} Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.643921 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3385c22-baa0-4261-b498-6a09c8768520","Type":"ContainerStarted","Data":"6e3e72cbe1386e2212554c075c732c997cd9eca5e8e80b367a0a82589ed0ceb4"} Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.647238 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964","Type":"ContainerStarted","Data":"f86fddfaec0875e79580c640fa9abc158961398e84a30e8571f5d4f75941dd57"} Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.649603 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.468808351 podStartE2EDuration="31.649591921s" podCreationTimestamp="2026-02-19 10:00:16 +0000 UTC" firstStartedPulling="2026-02-19 10:00:27.943205803 +0000 UTC m=+937.232637441" lastFinishedPulling="2026-02-19 10:00:35.123989333 +0000 UTC m=+944.413421011" observedRunningTime="2026-02-19 10:00:47.647681574 +0000 UTC m=+956.937113212" watchObservedRunningTime="2026-02-19 10:00:47.649591921 +0000 UTC m=+956.939023549" Feb 19 10:00:47 crc kubenswrapper[4873]: I0219 10:00:47.738425 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7569d6d65f-54dks" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: i/o timeout" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.127213 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:00:48 crc kubenswrapper[4873]: E0219 10:00:48.127756 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="init" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.127774 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="init" Feb 19 10:00:48 crc kubenswrapper[4873]: E0219 10:00:48.127790 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="dnsmasq-dns" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.127796 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="dnsmasq-dns" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.127932 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="040a64d2-c5d3-44b7-9ffd-8bd12c3d68b7" containerName="dnsmasq-dns" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.131126 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.148827 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.240416 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.240470 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.240504 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.240860 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.240906 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded" gracePeriod=600 Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.302475 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.302563 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.302614 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772f4\" (UniqueName: \"kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.403743 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities\") pod \"community-operators-9hfhg\" (UID: 
\"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.403854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.403893 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772f4\" (UniqueName: \"kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.404176 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.404874 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content\") pod \"community-operators-9hfhg\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.495079 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772f4\" (UniqueName: \"kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4\") pod \"community-operators-9hfhg\" (UID: 
\"f20bcc70-bf30-4949-951a-b36d083d205f\") " pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.656338 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerStarted","Data":"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb"} Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.658768 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vsnt5" event={"ID":"b0ab9d21-0c11-4940-ad43-3e20c46012ad","Type":"ContainerStarted","Data":"c5457889bdf77cfda38fa2f89068415a1f93d64bf677bfcf5b9afd8ebe657440"} Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.658857 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vsnt5" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.660216 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2","Type":"ContainerStarted","Data":"57567e0456fd2d45349518902ef81c44525ff9ae50ef5a0fa8ed4d2a66526532"} Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.665462 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerStarted","Data":"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f"} Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.680615 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded" exitCode=0 Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.680788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" 
event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded"} Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.680852 4873 scope.go:117] "RemoveContainer" containerID="025da7fd171f987961d862fe4ebef489eca80227003392ad78806aa501904663" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.717964 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vsnt5" podStartSLOduration=18.792274133 podStartE2EDuration="26.71794504s" podCreationTimestamp="2026-02-19 10:00:22 +0000 UTC" firstStartedPulling="2026-02-19 10:00:28.148159694 +0000 UTC m=+937.437591322" lastFinishedPulling="2026-02-19 10:00:36.073830571 +0000 UTC m=+945.363262229" observedRunningTime="2026-02-19 10:00:48.712906795 +0000 UTC m=+958.002338433" watchObservedRunningTime="2026-02-19 10:00:48.71794504 +0000 UTC m=+958.007376678" Feb 19 10:00:48 crc kubenswrapper[4873]: I0219 10:00:48.744189 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.699420 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t5bgp" event={"ID":"de2f2331-fc83-420b-9e1b-fe08998cb0ab","Type":"ContainerStarted","Data":"6120178479b9fb316a67f93c74ba7e6f2722b43b8d9831ee092b8a0f17b21fb1"} Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.715595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerStarted","Data":"7b07d5d4936c52da51ddc101b9f0cc93c881b6b0ec5f359e20ad961f5451f0a9"} Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.729387 4873 generic.go:334] "Generic (PLEG): container finished" podID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerID="c3c32c24ff9ddc9c878bf60c4e06dc7e24a6feab7886836d8ecf2510f7a2f602" exitCode=0 Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.729455 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerDied","Data":"c3c32c24ff9ddc9c878bf60c4e06dc7e24a6feab7886836d8ecf2510f7a2f602"} Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.732688 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerStarted","Data":"c66f926167963658837072f9619be2e502f49f78fdee38e1038232ea2b12e0fe"} Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.744807 4873 generic.go:334] "Generic (PLEG): container finished" podID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerID="94b88ff2b105134857189f15d457bd06c0d3247317f372402ca302f0541cc41d" exitCode=0 Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.745818 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerDied","Data":"94b88ff2b105134857189f15d457bd06c0d3247317f372402ca302f0541cc41d"} Feb 19 10:00:49 crc kubenswrapper[4873]: I0219 10:00:49.799719 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xw7xl" podStartSLOduration=9.387809768 podStartE2EDuration="25.799703109s" podCreationTimestamp="2026-02-19 10:00:24 +0000 UTC" firstStartedPulling="2026-02-19 10:00:29.871152504 +0000 UTC m=+939.160584142" lastFinishedPulling="2026-02-19 10:00:46.283045845 +0000 UTC m=+955.572477483" observedRunningTime="2026-02-19 10:00:49.789286812 +0000 UTC m=+959.078718450" watchObservedRunningTime="2026-02-19 10:00:49.799703109 +0000 UTC m=+959.089134747" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.061317 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:00:50 crc kubenswrapper[4873]: W0219 10:00:50.069031 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf20bcc70_bf30_4949_951a_b36d083d205f.slice/crio-d368a20b1ccdc9da62de20626c013e559624ef41b56690236309ce9d1a2a14ac WatchSource:0}: Error finding container d368a20b1ccdc9da62de20626c013e559624ef41b56690236309ce9d1a2a14ac: Status 404 returned error can't find the container with id d368a20b1ccdc9da62de20626c013e559624ef41b56690236309ce9d1a2a14ac Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.752685 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"877efa5f-4357-4396-8805-729237cd4e8f","Type":"ContainerStarted","Data":"68f113e2f0c70661921c5b2f008496a1db957fc6375a87458aa5aeb82be012c9"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.754533 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-t5bgp" event={"ID":"de2f2331-fc83-420b-9e1b-fe08998cb0ab","Type":"ContainerStarted","Data":"5924680f305955cfe970df85dca208e136dd35437f00770b44c5859330aba705"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.754683 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-t5bgp" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.754706 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-t5bgp" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.755500 4873 generic.go:334] "Generic (PLEG): container finished" podID="f20bcc70-bf30-4949-951a-b36d083d205f" containerID="eba7f3213f6b8b4ed8b7cfa357b6cb425beb34251cc11a43d090e87e4e2033e9" exitCode=0 Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.755543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerDied","Data":"eba7f3213f6b8b4ed8b7cfa357b6cb425beb34251cc11a43d090e87e4e2033e9"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.755558 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerStarted","Data":"d368a20b1ccdc9da62de20626c013e559624ef41b56690236309ce9d1a2a14ac"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.757691 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.760138 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"4574f6e3-d697-424c-a9f1-7b74afb82324","Type":"ContainerStarted","Data":"93d09717c75b6be2c36343780e551d316d65d55535537a0805a9c6228cfa4fdc"} Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.775915 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.764270632 podStartE2EDuration="25.775894016s" podCreationTimestamp="2026-02-19 10:00:25 +0000 UTC" firstStartedPulling="2026-02-19 10:00:28.557746708 +0000 UTC m=+937.847178346" lastFinishedPulling="2026-02-19 10:00:49.569369902 +0000 UTC m=+958.858801730" observedRunningTime="2026-02-19 10:00:50.774496991 +0000 UTC m=+960.063928629" watchObservedRunningTime="2026-02-19 10:00:50.775894016 +0000 UTC m=+960.065325654" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.795837 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-t5bgp" podStartSLOduration=22.601987528 podStartE2EDuration="28.795819447s" podCreationTimestamp="2026-02-19 10:00:22 +0000 UTC" firstStartedPulling="2026-02-19 10:00:29.88152922 +0000 UTC m=+939.170960858" lastFinishedPulling="2026-02-19 10:00:36.075361129 +0000 UTC m=+945.364792777" observedRunningTime="2026-02-19 10:00:50.793961391 +0000 UTC m=+960.083393029" watchObservedRunningTime="2026-02-19 10:00:50.795819447 +0000 UTC m=+960.085251085" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.831739 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.115661104 podStartE2EDuration="25.831725182s" podCreationTimestamp="2026-02-19 10:00:25 +0000 UTC" firstStartedPulling="2026-02-19 10:00:29.88315806 +0000 UTC m=+939.172589688" lastFinishedPulling="2026-02-19 10:00:49.599222128 +0000 UTC m=+958.888653766" observedRunningTime="2026-02-19 10:00:50.828140843 +0000 UTC m=+960.117572481" watchObservedRunningTime="2026-02-19 10:00:50.831725182 +0000 UTC m=+960.121156820" Feb 19 
10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.879114 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:50 crc kubenswrapper[4873]: I0219 10:00:50.925670 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:51 crc kubenswrapper[4873]: I0219 10:00:51.648023 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:51 crc kubenswrapper[4873]: I0219 10:00:51.769340 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerStarted","Data":"ddf2f2a59be91b05f4a24a87c32978d8341502e53c5702e5bb9180a41d34ff6f"} Feb 19 10:00:51 crc kubenswrapper[4873]: I0219 10:00:51.772743 4873 generic.go:334] "Generic (PLEG): container finished" podID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerID="26c7c425cb063e2664738a405a0bc48114123eb086c08ef61f83247d0e893cd3" exitCode=0 Feb 19 10:00:51 crc kubenswrapper[4873]: I0219 10:00:51.772781 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerDied","Data":"26c7c425cb063e2664738a405a0bc48114123eb086c08ef61f83247d0e893cd3"} Feb 19 10:00:51 crc kubenswrapper[4873]: I0219 10:00:51.774230 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:52 crc kubenswrapper[4873]: I0219 10:00:52.214732 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 10:00:52 crc kubenswrapper[4873]: I0219 10:00:52.782712 4873 generic.go:334] "Generic (PLEG): container finished" podID="f20bcc70-bf30-4949-951a-b36d083d205f" containerID="35d25dc52ac46707a9ed600e3f35ef54061a263d12693fd1d59a80b0fdce1fe0" exitCode=0 Feb 19 10:00:52 
crc kubenswrapper[4873]: I0219 10:00:52.782776 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerDied","Data":"35d25dc52ac46707a9ed600e3f35ef54061a263d12693fd1d59a80b0fdce1fe0"} Feb 19 10:00:52 crc kubenswrapper[4873]: I0219 10:00:52.784619 4873 generic.go:334] "Generic (PLEG): container finished" podID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerID="ddf2f2a59be91b05f4a24a87c32978d8341502e53c5702e5bb9180a41d34ff6f" exitCode=0 Feb 19 10:00:52 crc kubenswrapper[4873]: I0219 10:00:52.784643 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerDied","Data":"ddf2f2a59be91b05f4a24a87c32978d8341502e53c5702e5bb9180a41d34ff6f"} Feb 19 10:00:52 crc kubenswrapper[4873]: I0219 10:00:52.841000 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.162540 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.163990 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.165969 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.178922 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.303240 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.303291 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.303389 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.303441 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdz7\" (UniqueName: \"kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " 
pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.370174 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-djxfb"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.371150 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.374233 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.404577 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.404634 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdz7\" (UniqueName: \"kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.404724 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.404762 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.405766 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.405815 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.406474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.409641 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-djxfb"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.431865 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdz7\" (UniqueName: \"kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7\") pod \"dnsmasq-dns-5f8656d65-jtnp6\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.479256 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508540 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508587 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-combined-ca-bundle\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508629 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovn-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508657 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7b9k\" (UniqueName: \"kubernetes.io/projected/888c3336-cd8a-4bf2-805f-6b473fb272f4-kube-api-access-z7b9k\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508730 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c3336-cd8a-4bf2-805f-6b473fb272f4-config\") pod 
\"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.508760 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovs-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610574 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c3336-cd8a-4bf2-805f-6b473fb272f4-config\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovs-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610735 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610760 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-combined-ca-bundle\") pod \"ovn-controller-metrics-djxfb\" (UID: 
\"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610810 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovn-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.610849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7b9k\" (UniqueName: \"kubernetes.io/projected/888c3336-cd8a-4bf2-805f-6b473fb272f4-kube-api-access-z7b9k\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.613648 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovs-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.615894 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c3336-cd8a-4bf2-805f-6b473fb272f4-config\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.615972 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/888c3336-cd8a-4bf2-805f-6b473fb272f4-ovn-rundir\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 
crc kubenswrapper[4873]: I0219 10:00:53.618905 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.619032 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c3336-cd8a-4bf2-805f-6b473fb272f4-combined-ca-bundle\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.648130 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.649538 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7b9k\" (UniqueName: \"kubernetes.io/projected/888c3336-cd8a-4bf2-805f-6b473fb272f4-kube-api-access-z7b9k\") pod \"ovn-controller-metrics-djxfb\" (UID: \"888c3336-cd8a-4bf2-805f-6b473fb272f4\") " pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.691442 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-djxfb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.770974 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.847678 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.872891 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.889412 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.890674 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.895382 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 10:00:53 crc kubenswrapper[4873]: I0219 10:00:53.907710 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.036165 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.036206 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sqkb\" (UniqueName: \"kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: 
\"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.036247 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.036314 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.036343 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.098077 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.099385 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.100971 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.101220 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9vxxd" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.101253 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.101527 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.138053 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.138126 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.138171 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.138195 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6sqkb\" (UniqueName: \"kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.138224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.139079 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.139081 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.139213 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.139616 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.139941 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.163832 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sqkb\" (UniqueName: \"kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb\") pod \"dnsmasq-dns-767bbb56f-v5bpp\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.218312 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240205 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240262 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88p26\" (UniqueName: \"kubernetes.io/projected/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-kube-api-access-88p26\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240288 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-scripts\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " 
pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240319 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240428 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240457 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.240483 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-config\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341705 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-scripts\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341749 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341843 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341870 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341890 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-config\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341934 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.341957 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88p26\" (UniqueName: \"kubernetes.io/projected/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-kube-api-access-88p26\") pod \"ovn-northd-0\" (UID: 
\"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.342889 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-scripts\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.343145 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.344511 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-config\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.349693 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.350467 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.352378 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.392145 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88p26\" (UniqueName: \"kubernetes.io/projected/bd6df8e5-8bc5-4bd5-b466-a90642932cc2-kube-api-access-88p26\") pod \"ovn-northd-0\" (UID: \"bd6df8e5-8bc5-4bd5-b466-a90642932cc2\") " pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.412179 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.689214 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.689477 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.736791 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.815079 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.825749 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-djxfb"] Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.865499 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.961328 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:00:54 crc 
kubenswrapper[4873]: W0219 10:00:54.965282 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d0de876_c87f_4760_b06f_87b8ff7e5588.slice/crio-27513f9246dbe172eded455fb97103ca558e42e17c318279c0c8d3c8528981cd WatchSource:0}: Error finding container 27513f9246dbe172eded455fb97103ca558e42e17c318279c0c8d3c8528981cd: Status 404 returned error can't find the container with id 27513f9246dbe172eded455fb97103ca558e42e17c318279c0c8d3c8528981cd Feb 19 10:00:54 crc kubenswrapper[4873]: I0219 10:00:54.972700 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:00:54 crc kubenswrapper[4873]: W0219 10:00:54.972801 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6df8e5_8bc5_4bd5_b466_a90642932cc2.slice/crio-bd1e7c563ea6665a05a01dfb3cd5c188624daab075cd1165d88e9b11c768e91a WatchSource:0}: Error finding container bd1e7c563ea6665a05a01dfb3cd5c188624daab075cd1165d88e9b11c768e91a: Status 404 returned error can't find the container with id bd1e7c563ea6665a05a01dfb3cd5c188624daab075cd1165d88e9b11c768e91a Feb 19 10:00:55 crc kubenswrapper[4873]: I0219 10:00:55.811807 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-djxfb" event={"ID":"888c3336-cd8a-4bf2-805f-6b473fb272f4","Type":"ContainerStarted","Data":"6de985f9d9630982be09020970ef7e8fde590171a64d626fb0b58c8f59b3bbc4"} Feb 19 10:00:55 crc kubenswrapper[4873]: I0219 10:00:55.814348 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerStarted","Data":"e25bfdf167e531e407faddba4519081a8819c0f481dad1ec82bb12868d58279f"} Feb 19 10:00:55 crc kubenswrapper[4873]: I0219 10:00:55.815421 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" event={"ID":"d4069c7b-b867-4c6b-b5dd-91529a59d01c","Type":"ContainerStarted","Data":"be25608adf6f38eb11a1d8fdb4fb6018bdff1784849dbf4c40f2c123fef01c50"} Feb 19 10:00:55 crc kubenswrapper[4873]: I0219 10:00:55.816783 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" event={"ID":"7d0de876-c87f-4760-b06f-87b8ff7e5588","Type":"ContainerStarted","Data":"27513f9246dbe172eded455fb97103ca558e42e17c318279c0c8d3c8528981cd"} Feb 19 10:00:55 crc kubenswrapper[4873]: I0219 10:00:55.818262 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd6df8e5-8bc5-4bd5-b466-a90642932cc2","Type":"ContainerStarted","Data":"bd1e7c563ea6665a05a01dfb3cd5c188624daab075cd1165d88e9b11c768e91a"} Feb 19 10:00:58 crc kubenswrapper[4873]: I0219 10:00:58.324984 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"] Feb 19 10:00:58 crc kubenswrapper[4873]: I0219 10:00:58.325584 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xw7xl" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="registry-server" containerID="cri-o://c66f926167963658837072f9619be2e502f49f78fdee38e1038232ea2b12e0fe" gracePeriod=2 Feb 19 10:00:58 crc kubenswrapper[4873]: I0219 10:00:58.862167 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sjwbx" podStartSLOduration=13.408298391 podStartE2EDuration="17.862141391s" podCreationTimestamp="2026-02-19 10:00:41 +0000 UTC" firstStartedPulling="2026-02-19 10:00:49.74904034 +0000 UTC m=+959.038471978" lastFinishedPulling="2026-02-19 10:00:54.20288334 +0000 UTC m=+963.492314978" observedRunningTime="2026-02-19 10:00:58.85924955 +0000 UTC m=+968.148681218" watchObservedRunningTime="2026-02-19 10:00:58.862141391 +0000 UTC m=+968.151573089" Feb 19 10:00:59 
crc kubenswrapper[4873]: I0219 10:00:59.655934 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.720017 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"] Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.722833 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.738696 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"] Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.746722 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.746775 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcpc\" (UniqueName: \"kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.746800 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.746826 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.746899 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.854051 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.854907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.855040 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.855070 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qrcpc\" (UniqueName: \"kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.855089 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.855189 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.855867 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.856165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.856294 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config\") pod 
\"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:00:59 crc kubenswrapper[4873]: I0219 10:00:59.898174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrcpc\" (UniqueName: \"kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc\") pod \"dnsmasq-dns-7f5d85f6c-9z7rp\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.056768 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.758133 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.764266 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.766557 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.766693 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-lrs6j" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.766884 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.767907 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.809153 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.868681 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b21a02-7162-42ca-84cf-e0fa36b04a22-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.868740 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdhk\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-kube-api-access-5gdhk\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.868765 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.868903 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.869000 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-cache\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.869031 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-lock\") pod \"swift-storage-0\" (UID: 
\"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.869246 4873 generic.go:334] "Generic (PLEG): container finished" podID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerID="c66f926167963658837072f9619be2e502f49f78fdee38e1038232ea2b12e0fe" exitCode=0 Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.869290 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerDied","Data":"c66f926167963658837072f9619be2e502f49f78fdee38e1038232ea2b12e0fe"} Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.969978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970047 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-cache\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970067 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-lock\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970094 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b21a02-7162-42ca-84cf-e0fa36b04a22-combined-ca-bundle\") pod \"swift-storage-0\" (UID: 
\"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970135 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdhk\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-kube-api-access-5gdhk\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970149 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: E0219 10:01:00.970324 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:00 crc kubenswrapper[4873]: E0219 10:01:00.970338 4873 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:00 crc kubenswrapper[4873]: E0219 10:01:00.970379 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:01.470364286 +0000 UTC m=+970.759795924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.970797 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.971540 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-cache\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.971970 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3b21a02-7162-42ca-84cf-e0fa36b04a22-lock\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:00 crc kubenswrapper[4873]: I0219 10:01:00.978527 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b21a02-7162-42ca-84cf-e0fa36b04a22-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.054936 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdhk\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-kube-api-access-5gdhk\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " 
pod="openstack/swift-storage-0" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.059319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.323791 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mx6qq"] Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.324829 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.326274 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.326855 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.327027 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.340543 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mx6qq"] Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377202 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377242 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377304 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klgc\" (UniqueName: \"kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377332 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377399 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377468 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.377526 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.478403 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.478679 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:01 crc kubenswrapper[4873]: E0219 10:01:01.478841 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:01 crc kubenswrapper[4873]: E0219 10:01:01.478858 4873 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:01 crc kubenswrapper[4873]: E0219 10:01:01.478896 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:02.478880458 +0000 UTC m=+971.768312096 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479124 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479296 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479484 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479667 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479808 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.479950 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5klgc\" (UniqueName: \"kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.480129 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.480801 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.480950 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.485436 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf\") 
pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.486139 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.498823 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.514754 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5klgc\" (UniqueName: \"kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc\") pod \"swift-ring-rebalance-mx6qq\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:01 crc kubenswrapper[4873]: I0219 10:01:01.655204 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.079431 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.194151 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84czc\" (UniqueName: \"kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc\") pod \"8568b0bc-e3d1-4e4e-8172-bada186b750a\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.194334 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content\") pod \"8568b0bc-e3d1-4e4e-8172-bada186b750a\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.194456 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities\") pod \"8568b0bc-e3d1-4e4e-8172-bada186b750a\" (UID: \"8568b0bc-e3d1-4e4e-8172-bada186b750a\") " Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.195504 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities" (OuterVolumeSpecName: "utilities") pod "8568b0bc-e3d1-4e4e-8172-bada186b750a" (UID: "8568b0bc-e3d1-4e4e-8172-bada186b750a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.203479 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc" (OuterVolumeSpecName: "kube-api-access-84czc") pod "8568b0bc-e3d1-4e4e-8172-bada186b750a" (UID: "8568b0bc-e3d1-4e4e-8172-bada186b750a"). InnerVolumeSpecName "kube-api-access-84czc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.229611 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8568b0bc-e3d1-4e4e-8172-bada186b750a" (UID: "8568b0bc-e3d1-4e4e-8172-bada186b750a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.233432 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"] Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.280931 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.280982 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.296472 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.296505 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84czc\" (UniqueName: \"kubernetes.io/projected/8568b0bc-e3d1-4e4e-8172-bada186b750a-kube-api-access-84czc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.296519 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8568b0bc-e3d1-4e4e-8172-bada186b750a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.333112 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.499649 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:02 crc kubenswrapper[4873]: E0219 10:01:02.499918 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:02 crc kubenswrapper[4873]: E0219 10:01:02.499940 4873 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:02 crc kubenswrapper[4873]: E0219 10:01:02.499995 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:04.499979331 +0000 UTC m=+973.789410969 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.907334 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" event={"ID":"710b77db-c69e-4428-93f6-7ce8b2c7ee17","Type":"ContainerStarted","Data":"d5213d8f776a516eb0ebc1bff77eadf707410bc2d3c6d133cd538660a60a385d"} Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.909436 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7xl" event={"ID":"8568b0bc-e3d1-4e4e-8172-bada186b750a","Type":"ContainerDied","Data":"a80672d41aa2943f5cef88b806f5ebd8b5afa23daa0fded90e827fd2b0faf463"} Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.909479 4873 scope.go:117] "RemoveContainer" containerID="c66f926167963658837072f9619be2e502f49f78fdee38e1038232ea2b12e0fe" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.909591 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7xl" Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.918233 4873 generic.go:334] "Generic (PLEG): container finished" podID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerID="7b07d5d4936c52da51ddc101b9f0cc93c881b6b0ec5f359e20ad961f5451f0a9" exitCode=0 Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.918306 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerDied","Data":"7b07d5d4936c52da51ddc101b9f0cc93c881b6b0ec5f359e20ad961f5451f0a9"} Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.975346 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"] Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.982009 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7xl"] Feb 19 10:01:02 crc kubenswrapper[4873]: I0219 10:01:02.994131 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.493214 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" path="/var/lib/kubelet/pods/8568b0bc-e3d1-4e4e-8172-bada186b750a/volumes" Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.888922 4873 scope.go:117] "RemoveContainer" containerID="8997664180cc4a89bc96536538fcab96c359ac683add2f689bc03e6ec23ed7ec" Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.926558 4873 generic.go:334] "Generic (PLEG): container finished" podID="7d0de876-c87f-4760-b06f-87b8ff7e5588" containerID="556b5cd262209047f1b14207b0a637a7b665f1e1891d8c5b5396469110a2c80f" exitCode=0 Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.926708 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" event={"ID":"7d0de876-c87f-4760-b06f-87b8ff7e5588","Type":"ContainerDied","Data":"556b5cd262209047f1b14207b0a637a7b665f1e1891d8c5b5396469110a2c80f"} Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.933906 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerStarted","Data":"795aba3e60e31b3a457db244680452c8576c3d27a6b31cef59da782c97326eb3"} Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.938631 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-djxfb" event={"ID":"888c3336-cd8a-4bf2-805f-6b473fb272f4","Type":"ContainerStarted","Data":"ce29cfdf182270bd176b751824b9cb006868ef939d551449b7f39e7d292a4e1d"} Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.940430 4873 generic.go:334] "Generic (PLEG): container finished" podID="d4069c7b-b867-4c6b-b5dd-91529a59d01c" containerID="29f5ae38e1bf6410aa018f009d7b5755d54f5baf0b293e53dd34ae617aad0329" exitCode=0 Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.940504 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" event={"ID":"d4069c7b-b867-4c6b-b5dd-91529a59d01c","Type":"ContainerDied","Data":"29f5ae38e1bf6410aa018f009d7b5755d54f5baf0b293e53dd34ae617aad0329"} Feb 19 10:01:03 crc kubenswrapper[4873]: I0219 10:01:03.977066 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hfhg" podStartSLOduration=5.327064622 podStartE2EDuration="15.977050957s" podCreationTimestamp="2026-02-19 10:00:48 +0000 UTC" firstStartedPulling="2026-02-19 10:00:50.86856088 +0000 UTC m=+960.157992518" lastFinishedPulling="2026-02-19 10:01:01.518547195 +0000 UTC m=+970.807978853" observedRunningTime="2026-02-19 10:01:03.976118434 +0000 UTC m=+973.265550062" watchObservedRunningTime="2026-02-19 
10:01:03.977050957 +0000 UTC m=+973.266482595" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.079882 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-djxfb" podStartSLOduration=11.079851507 podStartE2EDuration="11.079851507s" podCreationTimestamp="2026-02-19 10:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:04.068513356 +0000 UTC m=+973.357944994" watchObservedRunningTime="2026-02-19 10:01:04.079851507 +0000 UTC m=+973.369283175" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.409335 4873 scope.go:117] "RemoveContainer" containerID="dcfd39b1c5289cd4a8556f216e399630fd4927789a8b353b5566c5719ea3fcee" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.463583 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.556006 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdz7\" (UniqueName: \"kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7\") pod \"7d0de876-c87f-4760-b06f-87b8ff7e5588\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.556291 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:04 crc kubenswrapper[4873]: E0219 10:01:04.556572 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:04 crc kubenswrapper[4873]: E0219 10:01:04.557773 4873 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:04 crc kubenswrapper[4873]: E0219 10:01:04.557861 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:08.557836776 +0000 UTC m=+977.847268464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.563720 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7" (OuterVolumeSpecName: "kube-api-access-mkdz7") pod "7d0de876-c87f-4760-b06f-87b8ff7e5588" (UID: "7d0de876-c87f-4760-b06f-87b8ff7e5588"). InnerVolumeSpecName "kube-api-access-mkdz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.657080 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config\") pod \"7d0de876-c87f-4760-b06f-87b8ff7e5588\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.657219 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc\") pod \"7d0de876-c87f-4760-b06f-87b8ff7e5588\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.657288 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb\") pod \"7d0de876-c87f-4760-b06f-87b8ff7e5588\" (UID: \"7d0de876-c87f-4760-b06f-87b8ff7e5588\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.657703 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdz7\" (UniqueName: \"kubernetes.io/projected/7d0de876-c87f-4760-b06f-87b8ff7e5588-kube-api-access-mkdz7\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.679814 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d0de876-c87f-4760-b06f-87b8ff7e5588" (UID: "7d0de876-c87f-4760-b06f-87b8ff7e5588"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.679894 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config" (OuterVolumeSpecName: "config") pod "7d0de876-c87f-4760-b06f-87b8ff7e5588" (UID: "7d0de876-c87f-4760-b06f-87b8ff7e5588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.685648 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d0de876-c87f-4760-b06f-87b8ff7e5588" (UID: "7d0de876-c87f-4760-b06f-87b8ff7e5588"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.760470 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.761397 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.761427 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.761440 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d0de876-c87f-4760-b06f-87b8ff7e5588-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.862476 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc\") pod \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.862511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config\") pod \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.862680 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb\") pod \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.862745 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb\") pod \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.862794 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sqkb\" (UniqueName: \"kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb\") pod \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\" (UID: \"d4069c7b-b867-4c6b-b5dd-91529a59d01c\") " Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.869702 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb" (OuterVolumeSpecName: "kube-api-access-6sqkb") pod "d4069c7b-b867-4c6b-b5dd-91529a59d01c" (UID: "d4069c7b-b867-4c6b-b5dd-91529a59d01c"). InnerVolumeSpecName "kube-api-access-6sqkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.889626 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4069c7b-b867-4c6b-b5dd-91529a59d01c" (UID: "d4069c7b-b867-4c6b-b5dd-91529a59d01c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.899620 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4069c7b-b867-4c6b-b5dd-91529a59d01c" (UID: "d4069c7b-b867-4c6b-b5dd-91529a59d01c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.899797 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4069c7b-b867-4c6b-b5dd-91529a59d01c" (UID: "d4069c7b-b867-4c6b-b5dd-91529a59d01c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.899986 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config" (OuterVolumeSpecName: "config") pod "d4069c7b-b867-4c6b-b5dd-91529a59d01c" (UID: "d4069c7b-b867-4c6b-b5dd-91529a59d01c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.965288 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sqkb\" (UniqueName: \"kubernetes.io/projected/d4069c7b-b867-4c6b-b5dd-91529a59d01c-kube-api-access-6sqkb\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.965936 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.965952 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.965965 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc 
kubenswrapper[4873]: I0219 10:01:04.965977 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4069c7b-b867-4c6b-b5dd-91529a59d01c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.970768 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" event={"ID":"d4069c7b-b867-4c6b-b5dd-91529a59d01c","Type":"ContainerDied","Data":"be25608adf6f38eb11a1d8fdb4fb6018bdff1784849dbf4c40f2c123fef01c50"} Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.970817 4873 scope.go:117] "RemoveContainer" containerID="29f5ae38e1bf6410aa018f009d7b5755d54f5baf0b293e53dd34ae617aad0329" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.970888 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767bbb56f-v5bpp" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.986607 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" Feb 19 10:01:04 crc kubenswrapper[4873]: I0219 10:01:04.986991 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f8656d65-jtnp6" event={"ID":"7d0de876-c87f-4760-b06f-87b8ff7e5588","Type":"ContainerDied","Data":"27513f9246dbe172eded455fb97103ca558e42e17c318279c0c8d3c8528981cd"} Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.048256 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.068584 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767bbb56f-v5bpp"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.079420 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.085564 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f8656d65-jtnp6"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.239617 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mx6qq"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.332299 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.332523 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sjwbx" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="registry-server" containerID="cri-o://e25bfdf167e531e407faddba4519081a8819c0f481dad1ec82bb12868d58279f" gracePeriod=2 Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.497347 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0de876-c87f-4760-b06f-87b8ff7e5588" path="/var/lib/kubelet/pods/7d0de876-c87f-4760-b06f-87b8ff7e5588/volumes" Feb 19 
10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.497837 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4069c7b-b867-4c6b-b5dd-91529a59d01c" path="/var/lib/kubelet/pods/d4069c7b-b867-4c6b-b5dd-91529a59d01c/volumes" Feb 19 10:01:05 crc kubenswrapper[4873]: I0219 10:01:05.801044 4873 scope.go:117] "RemoveContainer" containerID="556b5cd262209047f1b14207b0a637a7b665f1e1891d8c5b5396469110a2c80f" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.023986 4873 generic.go:334] "Generic (PLEG): container finished" podID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerID="e25bfdf167e531e407faddba4519081a8819c0f481dad1ec82bb12868d58279f" exitCode=0 Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.024089 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerDied","Data":"e25bfdf167e531e407faddba4519081a8819c0f481dad1ec82bb12868d58279f"} Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.043037 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerStarted","Data":"a571f36e40c7ad937de6c80c4fe2960d3a56ca51d705e8212a16e2333ee169bb"} Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.051523 4873 generic.go:334] "Generic (PLEG): container finished" podID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerID="84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4" exitCode=0 Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.051610 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" event={"ID":"710b77db-c69e-4428-93f6-7ce8b2c7ee17","Type":"ContainerDied","Data":"84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4"} Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.067963 4873 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-ring-rebalance-mx6qq" event={"ID":"91fbca18-847d-4e7b-8a40-e52dd348d155","Type":"ContainerStarted","Data":"a4cfa9b70a12969456d4389138494314fd5620b64ab7eefa7cbad4c7f8f20a88"} Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.074383 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lmkgp" podStartSLOduration=14.010381125 podStartE2EDuration="29.074361499s" podCreationTimestamp="2026-02-19 10:00:37 +0000 UTC" firstStartedPulling="2026-02-19 10:00:49.731416566 +0000 UTC m=+959.020848204" lastFinishedPulling="2026-02-19 10:01:04.79539695 +0000 UTC m=+974.084828578" observedRunningTime="2026-02-19 10:01:06.068180285 +0000 UTC m=+975.357611923" watchObservedRunningTime="2026-02-19 10:01:06.074361499 +0000 UTC m=+975.363793137" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.252234 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.324187 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content\") pod \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.324490 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities\") pod \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.324557 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v2hw\" (UniqueName: 
\"kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw\") pod \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\" (UID: \"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e\") " Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.325523 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities" (OuterVolumeSpecName: "utilities") pod "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" (UID: "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.328692 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw" (OuterVolumeSpecName: "kube-api-access-4v2hw") pod "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" (UID: "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e"). InnerVolumeSpecName "kube-api-access-4v2hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.390485 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" (UID: "dd6b83dc-5d8c-48f7-9e5e-9373c786f31e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.427217 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.427259 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:06 crc kubenswrapper[4873]: I0219 10:01:06.427270 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v2hw\" (UniqueName: \"kubernetes.io/projected/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e-kube-api-access-4v2hw\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.084712 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" event={"ID":"710b77db-c69e-4428-93f6-7ce8b2c7ee17","Type":"ContainerStarted","Data":"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97"} Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.085273 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.089016 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd6df8e5-8bc5-4bd5-b466-a90642932cc2","Type":"ContainerStarted","Data":"0c596bb1b79046534852c1d9e3299a9e12cd3769e29292507fc91e6583bf59a7"} Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.089062 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd6df8e5-8bc5-4bd5-b466-a90642932cc2","Type":"ContainerStarted","Data":"8e00278cfa2023995186da21c0a1d3e11321b27b525e575a4a20a1bd8589a082"} Feb 19 10:01:07 crc kubenswrapper[4873]: 
I0219 10:01:07.089571 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.091293 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sjwbx" event={"ID":"dd6b83dc-5d8c-48f7-9e5e-9373c786f31e","Type":"ContainerDied","Data":"6ceab5ac0f605f81c09fcddc9dd0cf16ee123c3875c132c11dd895633c1f969a"} Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.091335 4873 scope.go:117] "RemoveContainer" containerID="e25bfdf167e531e407faddba4519081a8819c0f481dad1ec82bb12868d58279f" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.091434 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sjwbx" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.096141 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5224ec80-b354-467f-b660-2d22b9725be0","Type":"ContainerStarted","Data":"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1"} Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.096570 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.111707 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" podStartSLOduration=8.111690194 podStartE2EDuration="8.111690194s" podCreationTimestamp="2026-02-19 10:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:07.10627477 +0000 UTC m=+976.395706398" watchObservedRunningTime="2026-02-19 10:01:07.111690194 +0000 UTC m=+976.401121832" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.149209 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-northd-0" podStartSLOduration=6.836120793 podStartE2EDuration="13.149175564s" podCreationTimestamp="2026-02-19 10:00:54 +0000 UTC" firstStartedPulling="2026-02-19 10:00:59.53285835 +0000 UTC m=+968.822289988" lastFinishedPulling="2026-02-19 10:01:05.845913121 +0000 UTC m=+975.135344759" observedRunningTime="2026-02-19 10:01:07.144339374 +0000 UTC m=+976.433771032" watchObservedRunningTime="2026-02-19 10:01:07.149175564 +0000 UTC m=+976.438607202" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.160809 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.51370348 podStartE2EDuration="48.160793752s" podCreationTimestamp="2026-02-19 10:00:19 +0000 UTC" firstStartedPulling="2026-02-19 10:00:28.160412476 +0000 UTC m=+937.449844114" lastFinishedPulling="2026-02-19 10:01:05.807502738 +0000 UTC m=+975.096934386" observedRunningTime="2026-02-19 10:01:07.159151512 +0000 UTC m=+976.448583170" watchObservedRunningTime="2026-02-19 10:01:07.160793752 +0000 UTC m=+976.450225390" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.178637 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.184959 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sjwbx"] Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.494160 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" path="/var/lib/kubelet/pods/dd6b83dc-5d8c-48f7-9e5e-9373c786f31e/volumes" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.903172 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:01:07 crc kubenswrapper[4873]: I0219 10:01:07.903242 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:01:08 crc kubenswrapper[4873]: I0219 10:01:08.567165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:08 crc kubenswrapper[4873]: E0219 10:01:08.567353 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:08 crc kubenswrapper[4873]: E0219 10:01:08.567534 4873 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:08 crc kubenswrapper[4873]: E0219 10:01:08.567589 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:16.567574254 +0000 UTC m=+985.857005882 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:08 crc kubenswrapper[4873]: I0219 10:01:08.745231 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:08 crc kubenswrapper[4873]: I0219 10:01:08.745382 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:08 crc kubenswrapper[4873]: I0219 10:01:08.805502 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:08 crc kubenswrapper[4873]: I0219 10:01:08.945089 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:08 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:08 crc kubenswrapper[4873]: > Feb 19 10:01:09 crc kubenswrapper[4873]: I0219 10:01:09.094031 4873 scope.go:117] "RemoveContainer" containerID="26c7c425cb063e2664738a405a0bc48114123eb086c08ef61f83247d0e893cd3" Feb 19 10:01:09 crc kubenswrapper[4873]: I0219 10:01:09.130505 4873 scope.go:117] "RemoveContainer" containerID="94b88ff2b105134857189f15d457bd06c0d3247317f372402ca302f0541cc41d" Feb 19 10:01:09 crc kubenswrapper[4873]: I0219 10:01:09.198323 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:10 crc kubenswrapper[4873]: I0219 10:01:10.160912 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mx6qq" 
event={"ID":"91fbca18-847d-4e7b-8a40-e52dd348d155","Type":"ContainerStarted","Data":"9dbc805ae65f008dd93910bebe60ab94566a15abae9fcb90aa3ea07d5a696df2"} Feb 19 10:01:10 crc kubenswrapper[4873]: I0219 10:01:10.182214 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mx6qq" podStartSLOduration=5.842911275 podStartE2EDuration="9.1821998s" podCreationTimestamp="2026-02-19 10:01:01 +0000 UTC" firstStartedPulling="2026-02-19 10:01:05.84184682 +0000 UTC m=+975.131278458" lastFinishedPulling="2026-02-19 10:01:09.181135345 +0000 UTC m=+978.470566983" observedRunningTime="2026-02-19 10:01:10.179988505 +0000 UTC m=+979.469420143" watchObservedRunningTime="2026-02-19 10:01:10.1821998 +0000 UTC m=+979.471631438" Feb 19 10:01:11 crc kubenswrapper[4873]: I0219 10:01:11.171083 4873 generic.go:334] "Generic (PLEG): container finished" podID="e3385c22-baa0-4261-b498-6a09c8768520" containerID="6e3e72cbe1386e2212554c075c732c997cd9eca5e8e80b367a0a82589ed0ceb4" exitCode=0 Feb 19 10:01:11 crc kubenswrapper[4873]: I0219 10:01:11.171152 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3385c22-baa0-4261-b498-6a09c8768520","Type":"ContainerDied","Data":"6e3e72cbe1386e2212554c075c732c997cd9eca5e8e80b367a0a82589ed0ceb4"} Feb 19 10:01:11 crc kubenswrapper[4873]: I0219 10:01:11.175206 4873 generic.go:334] "Generic (PLEG): container finished" podID="f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964" containerID="f86fddfaec0875e79580c640fa9abc158961398e84a30e8571f5d4f75941dd57" exitCode=0 Feb 19 10:01:11 crc kubenswrapper[4873]: I0219 10:01:11.175241 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964","Type":"ContainerDied","Data":"f86fddfaec0875e79580c640fa9abc158961398e84a30e8571f5d4f75941dd57"} Feb 19 10:01:11 crc kubenswrapper[4873]: I0219 10:01:11.724790 4873 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:01:12 crc kubenswrapper[4873]: I0219 10:01:12.191905 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e3385c22-baa0-4261-b498-6a09c8768520","Type":"ContainerStarted","Data":"c0ff139ab7b9c15c5ce4902e4dba259d9eab3e681a00c3431e03eaf2cbeb34a2"} Feb 19 10:01:12 crc kubenswrapper[4873]: I0219 10:01:12.195655 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964","Type":"ContainerStarted","Data":"1e830e7fa894ecd68ee7b302c102fb509a703a04c9a8887c6a5ab35b125981fe"} Feb 19 10:01:12 crc kubenswrapper[4873]: I0219 10:01:12.196008 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hfhg" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="registry-server" containerID="cri-o://795aba3e60e31b3a457db244680452c8576c3d27a6b31cef59da782c97326eb3" gracePeriod=2 Feb 19 10:01:12 crc kubenswrapper[4873]: I0219 10:01:12.221736 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=49.308784756 podStartE2EDuration="57.221692059s" podCreationTimestamp="2026-02-19 10:00:15 +0000 UTC" firstStartedPulling="2026-02-19 10:00:28.160924038 +0000 UTC m=+937.450355676" lastFinishedPulling="2026-02-19 10:00:36.073831341 +0000 UTC m=+945.363262979" observedRunningTime="2026-02-19 10:01:12.215415353 +0000 UTC m=+981.504847001" watchObservedRunningTime="2026-02-19 10:01:12.221692059 +0000 UTC m=+981.511123697" Feb 19 10:01:12 crc kubenswrapper[4873]: I0219 10:01:12.244149 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=49.478754334 podStartE2EDuration="58.244129735s" podCreationTimestamp="2026-02-19 10:00:14 +0000 UTC" 
firstStartedPulling="2026-02-19 10:00:27.636275009 +0000 UTC m=+936.925706647" lastFinishedPulling="2026-02-19 10:00:36.40165041 +0000 UTC m=+945.691082048" observedRunningTime="2026-02-19 10:01:12.235312207 +0000 UTC m=+981.524743865" watchObservedRunningTime="2026-02-19 10:01:12.244129735 +0000 UTC m=+981.533561373" Feb 19 10:01:13 crc kubenswrapper[4873]: I0219 10:01:13.205970 4873 generic.go:334] "Generic (PLEG): container finished" podID="f20bcc70-bf30-4949-951a-b36d083d205f" containerID="795aba3e60e31b3a457db244680452c8576c3d27a6b31cef59da782c97326eb3" exitCode=0 Feb 19 10:01:13 crc kubenswrapper[4873]: I0219 10:01:13.206635 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerDied","Data":"795aba3e60e31b3a457db244680452c8576c3d27a6b31cef59da782c97326eb3"} Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.008868 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.183845 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities\") pod \"f20bcc70-bf30-4949-951a-b36d083d205f\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.183990 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-772f4\" (UniqueName: \"kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4\") pod \"f20bcc70-bf30-4949-951a-b36d083d205f\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.184046 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content\") pod \"f20bcc70-bf30-4949-951a-b36d083d205f\" (UID: \"f20bcc70-bf30-4949-951a-b36d083d205f\") " Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.185331 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities" (OuterVolumeSpecName: "utilities") pod "f20bcc70-bf30-4949-951a-b36d083d205f" (UID: "f20bcc70-bf30-4949-951a-b36d083d205f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.192283 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4" (OuterVolumeSpecName: "kube-api-access-772f4") pod "f20bcc70-bf30-4949-951a-b36d083d205f" (UID: "f20bcc70-bf30-4949-951a-b36d083d205f"). InnerVolumeSpecName "kube-api-access-772f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.219951 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hfhg" event={"ID":"f20bcc70-bf30-4949-951a-b36d083d205f","Type":"ContainerDied","Data":"d368a20b1ccdc9da62de20626c013e559624ef41b56690236309ce9d1a2a14ac"} Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.219962 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hfhg" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.221197 4873 scope.go:117] "RemoveContainer" containerID="795aba3e60e31b3a457db244680452c8576c3d27a6b31cef59da782c97326eb3" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.222523 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerStarted","Data":"8c1b23b8ffccae7306c11a4c3d6415747d09b3586d46bdcc68e728a0936c32a0"} Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.245300 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f20bcc70-bf30-4949-951a-b36d083d205f" (UID: "f20bcc70-bf30-4949-951a-b36d083d205f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.245991 4873 scope.go:117] "RemoveContainer" containerID="35d25dc52ac46707a9ed600e3f35ef54061a263d12693fd1d59a80b0fdce1fe0" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.270256 4873 scope.go:117] "RemoveContainer" containerID="eba7f3213f6b8b4ed8b7cfa357b6cb425beb34251cc11a43d090e87e4e2033e9" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.286638 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.286672 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-772f4\" (UniqueName: \"kubernetes.io/projected/f20bcc70-bf30-4949-951a-b36d083d205f-kube-api-access-772f4\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.286684 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f20bcc70-bf30-4949-951a-b36d083d205f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.560709 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:01:14 crc kubenswrapper[4873]: I0219 10:01:14.567089 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hfhg"] Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.058412 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.118041 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"] Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.119603 4873 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="dnsmasq-dns" containerID="cri-o://7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750" gracePeriod=10 Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.499354 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" path="/var/lib/kubelet/pods/f20bcc70-bf30-4949-951a-b36d083d205f/volumes" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.676071 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.760278 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.760355 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.810236 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc\") pod \"d736e93a-6a36-458e-a8f4-a9d511530043\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.810616 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5m2m\" (UniqueName: \"kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m\") pod \"d736e93a-6a36-458e-a8f4-a9d511530043\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.810843 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config\") pod \"d736e93a-6a36-458e-a8f4-a9d511530043\" (UID: \"d736e93a-6a36-458e-a8f4-a9d511530043\") " Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.817976 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m" (OuterVolumeSpecName: "kube-api-access-s5m2m") pod "d736e93a-6a36-458e-a8f4-a9d511530043" (UID: "d736e93a-6a36-458e-a8f4-a9d511530043"). InnerVolumeSpecName "kube-api-access-s5m2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.857173 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d736e93a-6a36-458e-a8f4-a9d511530043" (UID: "d736e93a-6a36-458e-a8f4-a9d511530043"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.857858 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config" (OuterVolumeSpecName: "config") pod "d736e93a-6a36-458e-a8f4-a9d511530043" (UID: "d736e93a-6a36-458e-a8f4-a9d511530043"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.913396 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5m2m\" (UniqueName: \"kubernetes.io/projected/d736e93a-6a36-458e-a8f4-a9d511530043-kube-api-access-s5m2m\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.913428 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:15 crc kubenswrapper[4873]: I0219 10:01:15.913438 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d736e93a-6a36-458e-a8f4-a9d511530043-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.256306 4873 generic.go:334] "Generic (PLEG): container finished" podID="d736e93a-6a36-458e-a8f4-a9d511530043" containerID="7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750" exitCode=0 Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.256675 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" event={"ID":"d736e93a-6a36-458e-a8f4-a9d511530043","Type":"ContainerDied","Data":"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750"} Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.256710 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" event={"ID":"d736e93a-6a36-458e-a8f4-a9d511530043","Type":"ContainerDied","Data":"9fb66c25e5ae993349fe01e14d2ddec63fb2fa9c9f8fdca8f66724a42a087dcf"} Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.256769 4873 scope.go:117] "RemoveContainer" containerID="7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.257003 4873 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58ff7f48c5-nqbz4" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.299444 4873 scope.go:117] "RemoveContainer" containerID="11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.359176 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"] Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.375719 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58ff7f48c5-nqbz4"] Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.396670 4873 scope.go:117] "RemoveContainer" containerID="7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750" Feb 19 10:01:16 crc kubenswrapper[4873]: E0219 10:01:16.400225 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750\": container with ID starting with 7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750 not found: ID does not exist" containerID="7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.400277 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750"} err="failed to get container status \"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750\": rpc error: code = NotFound desc = could not find container \"7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750\": container with ID starting with 7de987cb52aa2fd800cbb7f97c53c2a6500299ae865b3cb3f8039ab8618c7750 not found: ID does not exist" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.400304 4873 scope.go:117] "RemoveContainer" 
containerID="11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3" Feb 19 10:01:16 crc kubenswrapper[4873]: E0219 10:01:16.404241 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3\": container with ID starting with 11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3 not found: ID does not exist" containerID="11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.404281 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3"} err="failed to get container status \"11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3\": rpc error: code = NotFound desc = could not find container \"11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3\": container with ID starting with 11d98b54d3d242cf39469162eef86399c7ac915ea837018032d2240dd76c8ed3 not found: ID does not exist" Feb 19 10:01:16 crc kubenswrapper[4873]: I0219 10:01:16.626854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0" Feb 19 10:01:16 crc kubenswrapper[4873]: E0219 10:01:16.627701 4873 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:16 crc kubenswrapper[4873]: E0219 10:01:16.627738 4873 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:16 crc kubenswrapper[4873]: E0219 10:01:16.627820 4873 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift podName:c3b21a02-7162-42ca-84cf-e0fa36b04a22 nodeName:}" failed. No retries permitted until 2026-02-19 10:01:32.62779836 +0000 UTC m=+1001.917230068 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift") pod "swift-storage-0" (UID: "c3b21a02-7162-42ca-84cf-e0fa36b04a22") : configmap "swift-ring-files" not found Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.130142 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.131077 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.266653 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.270956 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerStarted","Data":"25a26e338e379f85c4c91e347569e9af0c97d68170a522acb020aee8d309e23c"} Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.375605 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.498270 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" path="/var/lib/kubelet/pods/d736e93a-6a36-458e-a8f4-a9d511530043/volumes" Feb 19 10:01:17 crc kubenswrapper[4873]: I0219 10:01:17.823197 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vsnt5" podUID="b0ab9d21-0c11-4940-ad43-3e20c46012ad" 
containerName="ovn-controller" probeResult="failure" output=< Feb 19 10:01:17 crc kubenswrapper[4873]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 10:01:17 crc kubenswrapper[4873]: > Feb 19 10:01:18 crc kubenswrapper[4873]: I0219 10:01:18.281278 4873 generic.go:334] "Generic (PLEG): container finished" podID="91fbca18-847d-4e7b-8a40-e52dd348d155" containerID="9dbc805ae65f008dd93910bebe60ab94566a15abae9fcb90aa3ea07d5a696df2" exitCode=0 Feb 19 10:01:18 crc kubenswrapper[4873]: I0219 10:01:18.281373 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mx6qq" event={"ID":"91fbca18-847d-4e7b-8a40-e52dd348d155","Type":"ContainerDied","Data":"9dbc805ae65f008dd93910bebe60ab94566a15abae9fcb90aa3ea07d5a696df2"} Feb 19 10:01:18 crc kubenswrapper[4873]: I0219 10:01:18.959975 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:18 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:18 crc kubenswrapper[4873]: > Feb 19 10:01:19 crc kubenswrapper[4873]: I0219 10:01:19.593314 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:01:19 crc kubenswrapper[4873]: I0219 10:01:19.863666 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.015401 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.240213 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.302534 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mx6qq" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.302534 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mx6qq" event={"ID":"91fbca18-847d-4e7b-8a40-e52dd348d155","Type":"ContainerDied","Data":"a4cfa9b70a12969456d4389138494314fd5620b64ab7eefa7cbad4c7f8f20a88"} Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.302684 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4cfa9b70a12969456d4389138494314fd5620b64ab7eefa7cbad4c7f8f20a88" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.304870 4873 generic.go:334] "Generic (PLEG): container finished" podID="da89f0ff-c51c-4c4a-8df4-f7787d29ddd2" containerID="57567e0456fd2d45349518902ef81c44525ff9ae50ef5a0fa8ed4d2a66526532" exitCode=0 Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.304958 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2","Type":"ContainerDied","Data":"57567e0456fd2d45349518902ef81c44525ff9ae50ef5a0fa8ed4d2a66526532"} Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.309250 4873 generic.go:334] "Generic (PLEG): container finished" podID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerID="190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f" exitCode=0 Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.309295 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerDied","Data":"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f"} Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.312299 4873 generic.go:334] 
"Generic (PLEG): container finished" podID="86685946-19ac-434a-974f-99b5beeda172" containerID="853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb" exitCode=0 Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.312350 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerDied","Data":"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb"} Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322275 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322323 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322352 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322375 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322396 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322498 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.322536 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5klgc\" (UniqueName: \"kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc\") pod \"91fbca18-847d-4e7b-8a40-e52dd348d155\" (UID: \"91fbca18-847d-4e7b-8a40-e52dd348d155\") " Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.324118 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.324946 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.329765 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc" (OuterVolumeSpecName: "kube-api-access-5klgc") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "kube-api-access-5klgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.332592 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.355852 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.376711 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts" (OuterVolumeSpecName: "scripts") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.380590 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "91fbca18-847d-4e7b-8a40-e52dd348d155" (UID: "91fbca18-847d-4e7b-8a40-e52dd348d155"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.424624 4873 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.424890 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5klgc\" (UniqueName: \"kubernetes.io/projected/91fbca18-847d-4e7b-8a40-e52dd348d155-kube-api-access-5klgc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.424985 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.425073 4873 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/91fbca18-847d-4e7b-8a40-e52dd348d155-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.425214 4873 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/91fbca18-847d-4e7b-8a40-e52dd348d155-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.426179 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:20 crc kubenswrapper[4873]: I0219 10:01:20.426280 4873 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/91fbca18-847d-4e7b-8a40-e52dd348d155-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.323075 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerStarted","Data":"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72"} Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.324132 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.325833 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da89f0ff-c51c-4c4a-8df4-f7787d29ddd2","Type":"ContainerStarted","Data":"f2db44adca6cf6604ba2d059438ee7864c06927d8a0ee8e55a2e26392bb8b834"} Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.326037 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.328977 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerStarted","Data":"7feaa15c2585dfd6c2a4ea45a4afeb2729895aad024065a30a27359792d333ac"} Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.330904 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerStarted","Data":"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9"} Feb 19 10:01:21 crc 
kubenswrapper[4873]: I0219 10:01:21.331171 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.351325 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=61.276451502 podStartE2EDuration="1m9.351307936s" podCreationTimestamp="2026-02-19 10:00:12 +0000 UTC" firstStartedPulling="2026-02-19 10:00:27.999327016 +0000 UTC m=+937.288758654" lastFinishedPulling="2026-02-19 10:00:36.07418344 +0000 UTC m=+945.363615088" observedRunningTime="2026-02-19 10:01:21.349793878 +0000 UTC m=+990.639225516" watchObservedRunningTime="2026-02-19 10:01:21.351307936 +0000 UTC m=+990.640739574" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.390088 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.390858734 podStartE2EDuration="1m2.390068657s" podCreationTimestamp="2026-02-19 10:00:19 +0000 UTC" firstStartedPulling="2026-02-19 10:00:28.386744923 +0000 UTC m=+937.676176561" lastFinishedPulling="2026-02-19 10:01:20.385954846 +0000 UTC m=+989.675386484" observedRunningTime="2026-02-19 10:01:21.385686349 +0000 UTC m=+990.675118007" watchObservedRunningTime="2026-02-19 10:01:21.390068657 +0000 UTC m=+990.679500295" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.427443 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.340230855 podStartE2EDuration="1m9.427422664s" podCreationTimestamp="2026-02-19 10:00:12 +0000 UTC" firstStartedPulling="2026-02-19 10:00:27.036805875 +0000 UTC m=+936.326237513" lastFinishedPulling="2026-02-19 10:00:35.123997684 +0000 UTC m=+944.413429322" observedRunningTime="2026-02-19 10:01:21.420058621 +0000 UTC m=+990.709490279" watchObservedRunningTime="2026-02-19 10:01:21.427422664 +0000 UTC 
m=+990.716854312" Feb 19 10:01:21 crc kubenswrapper[4873]: I0219 10:01:21.447628 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=61.040081322 podStartE2EDuration="1m9.447602195s" podCreationTimestamp="2026-02-19 10:00:12 +0000 UTC" firstStartedPulling="2026-02-19 10:00:27.66757226 +0000 UTC m=+936.957003898" lastFinishedPulling="2026-02-19 10:00:36.075093133 +0000 UTC m=+945.364524771" observedRunningTime="2026-02-19 10:01:21.442643462 +0000 UTC m=+990.732075100" watchObservedRunningTime="2026-02-19 10:01:21.447602195 +0000 UTC m=+990.737033833" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.487294 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c73a-account-create-update-zxxrn"] Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488209 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="extract-utilities" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488280 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="extract-utilities" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488356 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0de876-c87f-4760-b06f-87b8ff7e5588" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488418 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0de876-c87f-4760-b06f-87b8ff7e5588" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488480 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="extract-content" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488537 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="extract-content" Feb 19 
10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488597 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fbca18-847d-4e7b-8a40-e52dd348d155" containerName="swift-ring-rebalance" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488648 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fbca18-847d-4e7b-8a40-e52dd348d155" containerName="swift-ring-rebalance" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488705 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488763 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488830 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="dnsmasq-dns" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.488905 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="dnsmasq-dns" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.488986 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="extract-content" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489048 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="extract-content" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.489135 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="extract-utilities" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489197 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="extract-utilities" Feb 19 10:01:22 crc 
kubenswrapper[4873]: E0219 10:01:22.489267 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4069c7b-b867-4c6b-b5dd-91529a59d01c" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489318 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4069c7b-b867-4c6b-b5dd-91529a59d01c" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.489382 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489446 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.489506 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="extract-content" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489560 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="extract-content" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.489626 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="extract-utilities" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489703 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="extract-utilities" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 10:01:22.489798 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489858 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: E0219 
10:01:22.489931 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.489985 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490214 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fbca18-847d-4e7b-8a40-e52dd348d155" containerName="swift-ring-rebalance" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490287 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8568b0bc-e3d1-4e4e-8172-bada186b750a" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490347 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d736e93a-6a36-458e-a8f4-a9d511530043" containerName="dnsmasq-dns" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490408 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20bcc70-bf30-4949-951a-b36d083d205f" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490468 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0de876-c87f-4760-b06f-87b8ff7e5588" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490530 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6b83dc-5d8c-48f7-9e5e-9373c786f31e" containerName="registry-server" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.490595 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4069c7b-b867-4c6b-b5dd-91529a59d01c" containerName="init" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.492842 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.500533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.504455 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c73a-account-create-update-zxxrn"] Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.536539 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p55tt"] Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.537522 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.553954 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p55tt"] Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.664211 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.664282 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.664317 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbm47\" (UniqueName: 
\"kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.664353 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6hvh\" (UniqueName: \"kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.766024 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbm47\" (UniqueName: \"kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.766459 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6hvh\" (UniqueName: \"kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.766703 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.767334 4873 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.767394 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.767929 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.784598 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbm47\" (UniqueName: \"kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47\") pod \"glance-db-create-p55tt\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " pod="openstack/glance-db-create-p55tt" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.791646 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6hvh\" (UniqueName: \"kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh\") pod \"glance-c73a-account-create-update-zxxrn\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.811951 4873 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-vsnt5" podUID="b0ab9d21-0c11-4940-ad43-3e20c46012ad" containerName="ovn-controller" probeResult="failure" output=< Feb 19 10:01:22 crc kubenswrapper[4873]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 10:01:22 crc kubenswrapper[4873]: > Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.816092 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.841220 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-t5bgp" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.841272 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-t5bgp" Feb 19 10:01:22 crc kubenswrapper[4873]: I0219 10:01:22.856862 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p55tt" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.061463 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vsnt5-config-mj2kr"] Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.063629 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.069925 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vsnt5-config-mj2kr"] Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.069949 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.165150 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p55tt"] Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175144 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175288 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175331 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175385 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbdg\" 
(UniqueName: \"kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175409 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.175483 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.276845 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.276943 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.276973 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.276993 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbdg\" (UniqueName: \"kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277012 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277057 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277361 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277410 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277519 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.277842 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.279319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.298160 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbdg\" (UniqueName: \"kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg\") pod \"ovn-controller-vsnt5-config-mj2kr\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.358052 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p55tt" 
event={"ID":"d81f72af-8420-4334-811e-f0e0cc1c7731","Type":"ContainerStarted","Data":"1dee66111cb00be8e1d7b10f7d4c7537cb2d5b855ce8c1d8f116d504503e7207"} Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.390115 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.434250 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c73a-account-create-update-zxxrn"] Feb 19 10:01:23 crc kubenswrapper[4873]: W0219 10:01:23.440825 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd0e3e74_f1aa_4b5f_a2ae_b89f90644f88.slice/crio-737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f WatchSource:0}: Error finding container 737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f: Status 404 returned error can't find the container with id 737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f Feb 19 10:01:23 crc kubenswrapper[4873]: I0219 10:01:23.867660 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vsnt5-config-mj2kr"] Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.340945 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d9nmp"] Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.342596 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.344853 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.350427 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9nmp"] Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.371803 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vsnt5-config-mj2kr" event={"ID":"b6410809-f775-4bf8-bc41-63f159854e76","Type":"ContainerStarted","Data":"0abe3568e1e754df8609a1e142601381c885a8173a04bd7acf2a44dd4c765ac5"} Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.372725 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c73a-account-create-update-zxxrn" event={"ID":"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88","Type":"ContainerStarted","Data":"737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f"} Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.482141 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.495970 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqt7l\" (UniqueName: \"kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l\") pod \"root-account-create-update-d9nmp\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.496027 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts\") pod \"root-account-create-update-d9nmp\" (UID: 
\"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.597917 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqt7l\" (UniqueName: \"kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l\") pod \"root-account-create-update-d9nmp\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.598003 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts\") pod \"root-account-create-update-d9nmp\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.599701 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts\") pod \"root-account-create-update-d9nmp\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.638943 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqt7l\" (UniqueName: \"kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l\") pod \"root-account-create-update-d9nmp\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:24 crc kubenswrapper[4873]: I0219 10:01:24.656427 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.136937 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9nmp"] Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.381125 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p55tt" event={"ID":"d81f72af-8420-4334-811e-f0e0cc1c7731","Type":"ContainerStarted","Data":"7cdae30c7bd3b7746068a98e9abd29f90395d96b144f2365d0ebb1da465756e0"} Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.382030 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9nmp" event={"ID":"ad0c7de4-412a-4e8c-90d0-817151c8a015","Type":"ContainerStarted","Data":"b1bc87ff69bfb1fa797a7dfd6ebfbe51b81cc642abb258a9974b216355561af2"} Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.383414 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c73a-account-create-update-zxxrn" event={"ID":"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88","Type":"ContainerStarted","Data":"5b7d076de566fc4d5772c9116560231f2acaebd0aad62281f0cb88965b142cc2"} Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.403265 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-p55tt" podStartSLOduration=3.40324313 podStartE2EDuration="3.40324313s" podCreationTimestamp="2026-02-19 10:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:25.398882352 +0000 UTC m=+994.688313990" watchObservedRunningTime="2026-02-19 10:01:25.40324313 +0000 UTC m=+994.692674768" Feb 19 10:01:25 crc kubenswrapper[4873]: I0219 10:01:25.863966 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.393176 4873 
generic.go:334] "Generic (PLEG): container finished" podID="ad0c7de4-412a-4e8c-90d0-817151c8a015" containerID="2f0ffc7ea2219fb39042b2ae636be2bc871ede3a5af5f5056178cf8abfebcb4d" exitCode=0 Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.393245 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9nmp" event={"ID":"ad0c7de4-412a-4e8c-90d0-817151c8a015","Type":"ContainerDied","Data":"2f0ffc7ea2219fb39042b2ae636be2bc871ede3a5af5f5056178cf8abfebcb4d"} Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.395334 4873 generic.go:334] "Generic (PLEG): container finished" podID="fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" containerID="5b7d076de566fc4d5772c9116560231f2acaebd0aad62281f0cb88965b142cc2" exitCode=0 Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.395406 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c73a-account-create-update-zxxrn" event={"ID":"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88","Type":"ContainerDied","Data":"5b7d076de566fc4d5772c9116560231f2acaebd0aad62281f0cb88965b142cc2"} Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.400022 4873 generic.go:334] "Generic (PLEG): container finished" podID="d81f72af-8420-4334-811e-f0e0cc1c7731" containerID="7cdae30c7bd3b7746068a98e9abd29f90395d96b144f2365d0ebb1da465756e0" exitCode=0 Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.400090 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p55tt" event={"ID":"d81f72af-8420-4334-811e-f0e0cc1c7731","Type":"ContainerDied","Data":"7cdae30c7bd3b7746068a98e9abd29f90395d96b144f2365d0ebb1da465756e0"} Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.402180 4873 generic.go:334] "Generic (PLEG): container finished" podID="b6410809-f775-4bf8-bc41-63f159854e76" containerID="300c17fe87cdc74fea5cc1a915ff92db53e3c3a4eee6ced7352b06833035dffb" exitCode=0 Feb 19 10:01:26 crc kubenswrapper[4873]: I0219 10:01:26.402233 4873 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-vsnt5-config-mj2kr" event={"ID":"b6410809-f775-4bf8-bc41-63f159854e76","Type":"ContainerDied","Data":"300c17fe87cdc74fea5cc1a915ff92db53e3c3a4eee6ced7352b06833035dffb"} Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.828792 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.837600 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vsnt5" Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.957769 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6hvh\" (UniqueName: \"kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh\") pod \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.957826 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts\") pod \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\" (UID: \"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88\") " Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.959900 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" (UID: "fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:27 crc kubenswrapper[4873]: I0219 10:01:27.966209 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh" (OuterVolumeSpecName: "kube-api-access-h6hvh") pod "fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" (UID: "fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88"). InnerVolumeSpecName "kube-api-access-h6hvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.042144 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.049972 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.059953 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p55tt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.060021 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6hvh\" (UniqueName: \"kubernetes.io/projected/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-kube-api-access-h6hvh\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.060043 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160804 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqt7l\" (UniqueName: \"kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l\") pod \"ad0c7de4-412a-4e8c-90d0-817151c8a015\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160840 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160887 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: 
\"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160912 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160902 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160945 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts\") pod \"ad0c7de4-412a-4e8c-90d0-817151c8a015\" (UID: \"ad0c7de4-412a-4e8c-90d0-817151c8a015\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160970 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161019 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbm47\" (UniqueName: \"kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47\") pod \"d81f72af-8420-4334-811e-f0e0cc1c7731\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161046 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lsbdg\" (UniqueName: \"kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg\") pod \"b6410809-f775-4bf8-bc41-63f159854e76\" (UID: \"b6410809-f775-4bf8-bc41-63f159854e76\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161129 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts\") pod \"d81f72af-8420-4334-811e-f0e0cc1c7731\" (UID: \"d81f72af-8420-4334-811e-f0e0cc1c7731\") " Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161659 4873 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.160968 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run" (OuterVolumeSpecName: "var-run") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161437 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.161906 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.162114 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts" (OuterVolumeSpecName: "scripts") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.162242 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad0c7de4-412a-4e8c-90d0-817151c8a015" (UID: "ad0c7de4-412a-4e8c-90d0-817151c8a015"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.162251 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d81f72af-8420-4334-811e-f0e0cc1c7731" (UID: "d81f72af-8420-4334-811e-f0e0cc1c7731"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.163709 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l" (OuterVolumeSpecName: "kube-api-access-pqt7l") pod "ad0c7de4-412a-4e8c-90d0-817151c8a015" (UID: "ad0c7de4-412a-4e8c-90d0-817151c8a015"). InnerVolumeSpecName "kube-api-access-pqt7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.166582 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg" (OuterVolumeSpecName: "kube-api-access-lsbdg") pod "b6410809-f775-4bf8-bc41-63f159854e76" (UID: "b6410809-f775-4bf8-bc41-63f159854e76"). InnerVolumeSpecName "kube-api-access-lsbdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.169056 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47" (OuterVolumeSpecName: "kube-api-access-fbm47") pod "d81f72af-8420-4334-811e-f0e0cc1c7731" (UID: "d81f72af-8420-4334-811e-f0e0cc1c7731"). InnerVolumeSpecName "kube-api-access-fbm47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262607 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqt7l\" (UniqueName: \"kubernetes.io/projected/ad0c7de4-412a-4e8c-90d0-817151c8a015-kube-api-access-pqt7l\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262637 4873 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262646 4873 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262656 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6410809-f775-4bf8-bc41-63f159854e76-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262666 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0c7de4-412a-4e8c-90d0-817151c8a015-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262674 4873 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b6410809-f775-4bf8-bc41-63f159854e76-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262684 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbm47\" (UniqueName: \"kubernetes.io/projected/d81f72af-8420-4334-811e-f0e0cc1c7731-kube-api-access-fbm47\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 
10:01:28.262693 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsbdg\" (UniqueName: \"kubernetes.io/projected/b6410809-f775-4bf8-bc41-63f159854e76-kube-api-access-lsbdg\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.262702 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81f72af-8420-4334-811e-f0e0cc1c7731-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302394 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-r4fbt"] Feb 19 10:01:28 crc kubenswrapper[4873]: E0219 10:01:28.302739 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0c7de4-412a-4e8c-90d0-817151c8a015" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302755 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0c7de4-412a-4e8c-90d0-817151c8a015" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: E0219 10:01:28.302777 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81f72af-8420-4334-811e-f0e0cc1c7731" containerName="mariadb-database-create" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302784 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81f72af-8420-4334-811e-f0e0cc1c7731" containerName="mariadb-database-create" Feb 19 10:01:28 crc kubenswrapper[4873]: E0219 10:01:28.302793 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6410809-f775-4bf8-bc41-63f159854e76" containerName="ovn-config" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302799 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6410809-f775-4bf8-bc41-63f159854e76" containerName="ovn-config" Feb 19 10:01:28 crc kubenswrapper[4873]: E0219 10:01:28.302807 4873 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302814 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302960 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0c7de4-412a-4e8c-90d0-817151c8a015" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302975 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6410809-f775-4bf8-bc41-63f159854e76" containerName="ovn-config" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302984 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" containerName="mariadb-account-create-update" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.302993 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81f72af-8420-4334-811e-f0e0cc1c7731" containerName="mariadb-database-create" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.303523 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.315360 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r4fbt"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.403616 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e187-account-create-update-4xb7l"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.404553 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e187-account-create-update-4xb7l" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.406789 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.421751 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e187-account-create-update-4xb7l"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.422164 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c73a-account-create-update-zxxrn" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.422229 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c73a-account-create-update-zxxrn" event={"ID":"fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88","Type":"ContainerDied","Data":"737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f"} Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.422265 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737a12bc43c5a907f8cb10150d1c5a92a48abbaf8d51eba39a18020cba56938f" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.427606 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p55tt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.427810 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p55tt" event={"ID":"d81f72af-8420-4334-811e-f0e0cc1c7731","Type":"ContainerDied","Data":"1dee66111cb00be8e1d7b10f7d4c7537cb2d5b855ce8c1d8f116d504503e7207"} Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.427864 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dee66111cb00be8e1d7b10f7d4c7537cb2d5b855ce8c1d8f116d504503e7207" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.429700 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vsnt5-config-mj2kr" event={"ID":"b6410809-f775-4bf8-bc41-63f159854e76","Type":"ContainerDied","Data":"0abe3568e1e754df8609a1e142601381c885a8173a04bd7acf2a44dd4c765ac5"} Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.429740 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abe3568e1e754df8609a1e142601381c885a8173a04bd7acf2a44dd4c765ac5" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.429807 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vsnt5-config-mj2kr" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.436882 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9nmp" event={"ID":"ad0c7de4-412a-4e8c-90d0-817151c8a015","Type":"ContainerDied","Data":"b1bc87ff69bfb1fa797a7dfd6ebfbe51b81cc642abb258a9974b216355561af2"} Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.436920 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1bc87ff69bfb1fa797a7dfd6ebfbe51b81cc642abb258a9974b216355561af2" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.436993 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9nmp" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.465349 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.465488 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgfcv\" (UniqueName: \"kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.497717 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-46kds"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.498854 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-46kds" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.506089 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-46kds"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.567020 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.567131 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hpt\" (UniqueName: \"kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt\") pod \"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.567186 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgfcv\" (UniqueName: \"kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.567226 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts\") pod \"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.568281 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.595529 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgfcv\" (UniqueName: \"kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv\") pod \"keystone-db-create-r4fbt\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") " pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.607689 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f064-account-create-update-flh2f"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.608728 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f064-account-create-update-flh2f" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.611674 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.621018 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f064-account-create-update-flh2f"] Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.624269 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-r4fbt" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.668765 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hpt\" (UniqueName: \"kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt\") pod \"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.668815 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.668883 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p9lk\" (UniqueName: \"kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.668916 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts\") pod \"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l" Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.669571 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts\") pod 
\"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.686738 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hpt\" (UniqueName: \"kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt\") pod \"keystone-e187-account-create-update-4xb7l\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") " pod="openstack/keystone-e187-account-create-update-4xb7l"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.723370 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e187-account-create-update-4xb7l"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.770753 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p9lk\" (UniqueName: \"kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.770836 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.770958 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2sgk\" (UniqueName: \"kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.770985 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.771877 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.796082 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p9lk\" (UniqueName: \"kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk\") pod \"placement-db-create-46kds\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") " pod="openstack/placement-db-create-46kds"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.818663 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-46kds"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.874995 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.875146 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2sgk\" (UniqueName: \"kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.875984 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.896765 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2sgk\" (UniqueName: \"kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk\") pod \"placement-f064-account-create-update-flh2f\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") " pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.930763 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:28 crc kubenswrapper[4873]: I0219 10:01:28.948340 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:01:28 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:01:28 crc kubenswrapper[4873]: >
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.192351 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r4fbt"]
Feb 19 10:01:29 crc kubenswrapper[4873]: W0219 10:01:29.199073 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1770246_951b_40da_a0a2_4320dde71437.slice/crio-c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2 WatchSource:0}: Error finding container c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2: Status 404 returned error can't find the container with id c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.320140 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e187-account-create-update-4xb7l"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.348948 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vsnt5-config-mj2kr"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.363326 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vsnt5-config-mj2kr"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.466954 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4fbt" event={"ID":"a1770246-951b-40da-a0a2-4320dde71437","Type":"ContainerStarted","Data":"c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2"}
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.481916 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e187-account-create-update-4xb7l" event={"ID":"af085fbb-9aaa-4d01-8a0f-a061acf3a845","Type":"ContainerStarted","Data":"182e7c202e54319407360794cf227f99f4d9087f026461b7d72de7a24ada750c"}
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.482876 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-46kds"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.514114 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6410809-f775-4bf8-bc41-63f159854e76" path="/var/lib/kubelet/pods/b6410809-f775-4bf8-bc41-63f159854e76/volumes"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.517495 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46kds" event={"ID":"6584bab0-12c6-4bce-99be-d38f3748f896","Type":"ContainerStarted","Data":"8c9bc379e5f12ad6cc9fc4a894a6c0e8ca43c8a837f75aefd0d3827ab77c8833"}
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.537494 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f064-account-create-update-flh2f"]
Feb 19 10:01:29 crc kubenswrapper[4873]: W0219 10:01:29.553076 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod179cf76d_a15d_4bce_be42_18ad2e4abb94.slice/crio-32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727 WatchSource:0}: Error finding container 32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727: Status 404 returned error can't find the container with id 32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.620089 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-dftzh"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.621829 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.630557 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-dftzh"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.718086 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-087c-account-create-update-qnlsx"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.719156 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.721483 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.729811 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-087c-account-create-update-qnlsx"]
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.799510 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.799681 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hmj\" (UniqueName: \"kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.799878 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.799993 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvsx\" (UniqueName: \"kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.901554 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.901651 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsvsx\" (UniqueName: \"kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.901741 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.901794 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2hmj\" (UniqueName: \"kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.902679 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.902803 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.924165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsvsx\" (UniqueName: \"kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx\") pod \"watcher-087c-account-create-update-qnlsx\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.924443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2hmj\" (UniqueName: \"kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj\") pod \"watcher-db-create-dftzh\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:29 crc kubenswrapper[4873]: I0219 10:01:29.954562 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dftzh"
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.075256 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-087c-account-create-update-qnlsx"
Feb 19 10:01:30 crc kubenswrapper[4873]: W0219 10:01:30.460855 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8b7b2e_f4a8_4af9_99aa_a1e8c3d78bd4.slice/crio-3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a WatchSource:0}: Error finding container 3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a: Status 404 returned error can't find the container with id 3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.479021 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-dftzh"]
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.509416 4873 generic.go:334] "Generic (PLEG): container finished" podID="6584bab0-12c6-4bce-99be-d38f3748f896" containerID="2be1eaacedf333b387e3ffd6dce5223b73f9487c48808cd68df4b60a3f55fd39" exitCode=0
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.509498 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46kds" event={"ID":"6584bab0-12c6-4bce-99be-d38f3748f896","Type":"ContainerDied","Data":"2be1eaacedf333b387e3ffd6dce5223b73f9487c48808cd68df4b60a3f55fd39"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.511192 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dftzh" event={"ID":"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4","Type":"ContainerStarted","Data":"3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.513139 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4fbt" event={"ID":"a1770246-951b-40da-a0a2-4320dde71437","Type":"ContainerDied","Data":"3a7ee324cc97736a2be2ff10cda880e991b9ebce5c06108335c9156379f7a8ea"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.513211 4873 generic.go:334] "Generic (PLEG): container finished" podID="a1770246-951b-40da-a0a2-4320dde71437" containerID="3a7ee324cc97736a2be2ff10cda880e991b9ebce5c06108335c9156379f7a8ea" exitCode=0
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.518946 4873 generic.go:334] "Generic (PLEG): container finished" podID="179cf76d-a15d-4bce-be42-18ad2e4abb94" containerID="7d6d1faa851ee46aca753c0c6509416782269dc982725ceddb2cd7f19fc16f13" exitCode=0
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.519025 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f064-account-create-update-flh2f" event={"ID":"179cf76d-a15d-4bce-be42-18ad2e4abb94","Type":"ContainerDied","Data":"7d6d1faa851ee46aca753c0c6509416782269dc982725ceddb2cd7f19fc16f13"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.519056 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f064-account-create-update-flh2f" event={"ID":"179cf76d-a15d-4bce-be42-18ad2e4abb94","Type":"ContainerStarted","Data":"32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.522143 4873 generic.go:334] "Generic (PLEG): container finished" podID="af085fbb-9aaa-4d01-8a0f-a061acf3a845" containerID="d63b34383441b0e539673e31cf4ea017f3d4fcdbd72ad26d47bf96c33fcf565d" exitCode=0
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.522210 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e187-account-create-update-4xb7l" event={"ID":"af085fbb-9aaa-4d01-8a0f-a061acf3a845","Type":"ContainerDied","Data":"d63b34383441b0e539673e31cf4ea017f3d4fcdbd72ad26d47bf96c33fcf565d"}
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.565577 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-087c-account-create-update-qnlsx"]
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.812800 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9nmp"]
Feb 19 10:01:30 crc kubenswrapper[4873]: I0219 10:01:30.820160 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d9nmp"]
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.502070 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0c7de4-412a-4e8c-90d0-817151c8a015" path="/var/lib/kubelet/pods/ad0c7de4-412a-4e8c-90d0-817151c8a015/volumes"
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.532266 4873 generic.go:334] "Generic (PLEG): container finished" podID="7b0cc2ef-89a2-4220-8b44-7fc71537ab50" containerID="f0d3f3ad8d69a092fbacd08190bbe079ce8644eec25f2003bbee9cc3d511dd9c" exitCode=0
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.532350 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-087c-account-create-update-qnlsx" event={"ID":"7b0cc2ef-89a2-4220-8b44-7fc71537ab50","Type":"ContainerDied","Data":"f0d3f3ad8d69a092fbacd08190bbe079ce8644eec25f2003bbee9cc3d511dd9c"}
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.532402 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-087c-account-create-update-qnlsx" event={"ID":"7b0cc2ef-89a2-4220-8b44-7fc71537ab50","Type":"ContainerStarted","Data":"3721d608d71f840cfefd584322332fd88b72dcb989cdcfcd288c6da01a5126d0"}
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.534167 4873 generic.go:334] "Generic (PLEG): container finished" podID="bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" containerID="6a370733fa679d2517624889ae788a6c37c512bf2894dbe6a54f6e24bdad6056" exitCode=0
Feb 19 10:01:31 crc kubenswrapper[4873]: I0219 10:01:31.534214 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dftzh" event={"ID":"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4","Type":"ContainerDied","Data":"6a370733fa679d2517624889ae788a6c37c512bf2894dbe6a54f6e24bdad6056"}
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.105826 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.112060 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-46kds"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.117534 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4fbt"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.125384 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e187-account-create-update-4xb7l"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263393 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts\") pod \"179cf76d-a15d-4bce-be42-18ad2e4abb94\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263447 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2sgk\" (UniqueName: \"kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk\") pod \"179cf76d-a15d-4bce-be42-18ad2e4abb94\" (UID: \"179cf76d-a15d-4bce-be42-18ad2e4abb94\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hpt\" (UniqueName: \"kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt\") pod \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263553 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgfcv\" (UniqueName: \"kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv\") pod \"a1770246-951b-40da-a0a2-4320dde71437\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263579 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts\") pod \"6584bab0-12c6-4bce-99be-d38f3748f896\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263648 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts\") pod \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\" (UID: \"af085fbb-9aaa-4d01-8a0f-a061acf3a845\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263675 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts\") pod \"a1770246-951b-40da-a0a2-4320dde71437\" (UID: \"a1770246-951b-40da-a0a2-4320dde71437\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.263733 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p9lk\" (UniqueName: \"kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk\") pod \"6584bab0-12c6-4bce-99be-d38f3748f896\" (UID: \"6584bab0-12c6-4bce-99be-d38f3748f896\") "
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.264331 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "179cf76d-a15d-4bce-be42-18ad2e4abb94" (UID: "179cf76d-a15d-4bce-be42-18ad2e4abb94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.264516 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af085fbb-9aaa-4d01-8a0f-a061acf3a845" (UID: "af085fbb-9aaa-4d01-8a0f-a061acf3a845"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.264540 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1770246-951b-40da-a0a2-4320dde71437" (UID: "a1770246-951b-40da-a0a2-4320dde71437"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.264873 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6584bab0-12c6-4bce-99be-d38f3748f896" (UID: "6584bab0-12c6-4bce-99be-d38f3748f896"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.270005 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk" (OuterVolumeSpecName: "kube-api-access-5p9lk") pod "6584bab0-12c6-4bce-99be-d38f3748f896" (UID: "6584bab0-12c6-4bce-99be-d38f3748f896"). InnerVolumeSpecName "kube-api-access-5p9lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.270434 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv" (OuterVolumeSpecName: "kube-api-access-xgfcv") pod "a1770246-951b-40da-a0a2-4320dde71437" (UID: "a1770246-951b-40da-a0a2-4320dde71437"). InnerVolumeSpecName "kube-api-access-xgfcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.272340 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt" (OuterVolumeSpecName: "kube-api-access-g4hpt") pod "af085fbb-9aaa-4d01-8a0f-a061acf3a845" (UID: "af085fbb-9aaa-4d01-8a0f-a061acf3a845"). InnerVolumeSpecName "kube-api-access-g4hpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.276362 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk" (OuterVolumeSpecName: "kube-api-access-p2sgk") pod "179cf76d-a15d-4bce-be42-18ad2e4abb94" (UID: "179cf76d-a15d-4bce-be42-18ad2e4abb94"). InnerVolumeSpecName "kube-api-access-p2sgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365807 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179cf76d-a15d-4bce-be42-18ad2e4abb94-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365850 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2sgk\" (UniqueName: \"kubernetes.io/projected/179cf76d-a15d-4bce-be42-18ad2e4abb94-kube-api-access-p2sgk\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365864 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hpt\" (UniqueName: \"kubernetes.io/projected/af085fbb-9aaa-4d01-8a0f-a061acf3a845-kube-api-access-g4hpt\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365872 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgfcv\" (UniqueName: \"kubernetes.io/projected/a1770246-951b-40da-a0a2-4320dde71437-kube-api-access-xgfcv\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365883 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6584bab0-12c6-4bce-99be-d38f3748f896-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365894 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af085fbb-9aaa-4d01-8a0f-a061acf3a845-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365904 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1770246-951b-40da-a0a2-4320dde71437-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.365915 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p9lk\" (UniqueName: \"kubernetes.io/projected/6584bab0-12c6-4bce-99be-d38f3748f896-kube-api-access-5p9lk\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.544062 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r4fbt"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.544112 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r4fbt" event={"ID":"a1770246-951b-40da-a0a2-4320dde71437","Type":"ContainerDied","Data":"c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2"}
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.544153 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00ae3a8c5b91d1638f081e9179c87fa9241c86e2f6597e4bda03d4a8763efb2"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.546206 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f064-account-create-update-flh2f"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.546202 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f064-account-create-update-flh2f" event={"ID":"179cf76d-a15d-4bce-be42-18ad2e4abb94","Type":"ContainerDied","Data":"32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727"}
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.546388 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cc1e04ecdfcb01e465840967113b749d5b3dd117345a4527603f742b0f0727"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.548601 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e187-account-create-update-4xb7l" event={"ID":"af085fbb-9aaa-4d01-8a0f-a061acf3a845","Type":"ContainerDied","Data":"182e7c202e54319407360794cf227f99f4d9087f026461b7d72de7a24ada750c"}
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.548633 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182e7c202e54319407360794cf227f99f4d9087f026461b7d72de7a24ada750c"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.548634 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e187-account-create-update-4xb7l"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.551000 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-46kds" event={"ID":"6584bab0-12c6-4bce-99be-d38f3748f896","Type":"ContainerDied","Data":"8c9bc379e5f12ad6cc9fc4a894a6c0e8ca43c8a837f75aefd0d3827ab77c8833"}
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.551056 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9bc379e5f12ad6cc9fc4a894a6c0e8ca43c8a837f75aefd0d3827ab77c8833"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.551118 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-46kds"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.675022 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.683361 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3b21a02-7162-42ca-84cf-e0fa36b04a22-etc-swift\") pod \"swift-storage-0\" (UID: \"c3b21a02-7162-42ca-84cf-e0fa36b04a22\") " pod="openstack/swift-storage-0"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.689701 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9472r"]
Feb 19 10:01:32 crc kubenswrapper[4873]: E0219 10:01:32.690590 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179cf76d-a15d-4bce-be42-18ad2e4abb94" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690614 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="179cf76d-a15d-4bce-be42-18ad2e4abb94" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: E0219 10:01:32.690646 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af085fbb-9aaa-4d01-8a0f-a061acf3a845" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690656 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="af085fbb-9aaa-4d01-8a0f-a061acf3a845" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: E0219 10:01:32.690673 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6584bab0-12c6-4bce-99be-d38f3748f896" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690678 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6584bab0-12c6-4bce-99be-d38f3748f896" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: E0219 10:01:32.690688 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1770246-951b-40da-a0a2-4320dde71437" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690694 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1770246-951b-40da-a0a2-4320dde71437" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690886 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="af085fbb-9aaa-4d01-8a0f-a061acf3a845" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690902 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1770246-951b-40da-a0a2-4320dde71437" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690910 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="179cf76d-a15d-4bce-be42-18ad2e4abb94" containerName="mariadb-account-create-update"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.690920 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="6584bab0-12c6-4bce-99be-d38f3748f896" containerName="mariadb-database-create"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.691518 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9472r"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.694790 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n9qxt"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.694805 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.711656 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9472r"]
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.878144 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.879596 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r"
Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.879689 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data\") pod \"glance-db-sync-9472r\" (UID: 
\"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.879750 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfz7\" (UniqueName: \"kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.896462 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.981071 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.981236 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.981268 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.981292 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btfz7\" (UniqueName: 
\"kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.987477 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.987802 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:32 crc kubenswrapper[4873]: I0219 10:01:32.996022 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.015584 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btfz7\" (UniqueName: \"kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7\") pod \"glance-db-sync-9472r\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " pod="openstack/glance-db-sync-9472r" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.163602 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dftzh" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.190838 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-087c-account-create-update-qnlsx" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.284567 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts\") pod \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.284904 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2hmj\" (UniqueName: \"kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj\") pod \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\" (UID: \"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4\") " Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.285746 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" (UID: "bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.290159 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj" (OuterVolumeSpecName: "kube-api-access-q2hmj") pod "bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" (UID: "bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4"). InnerVolumeSpecName "kube-api-access-q2hmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.312835 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9472r" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.389513 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsvsx\" (UniqueName: \"kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx\") pod \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.389583 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts\") pod \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\" (UID: \"7b0cc2ef-89a2-4220-8b44-7fc71537ab50\") " Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.389823 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.389835 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2hmj\" (UniqueName: \"kubernetes.io/projected/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4-kube-api-access-q2hmj\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.390120 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b0cc2ef-89a2-4220-8b44-7fc71537ab50" (UID: "7b0cc2ef-89a2-4220-8b44-7fc71537ab50"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.395487 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx" (OuterVolumeSpecName: "kube-api-access-nsvsx") pod "7b0cc2ef-89a2-4220-8b44-7fc71537ab50" (UID: "7b0cc2ef-89a2-4220-8b44-7fc71537ab50"). InnerVolumeSpecName "kube-api-access-nsvsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.493435 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.493907 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsvsx\" (UniqueName: \"kubernetes.io/projected/7b0cc2ef-89a2-4220-8b44-7fc71537ab50-kube-api-access-nsvsx\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.569426 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-087c-account-create-update-qnlsx" event={"ID":"7b0cc2ef-89a2-4220-8b44-7fc71537ab50","Type":"ContainerDied","Data":"3721d608d71f840cfefd584322332fd88b72dcb989cdcfcd288c6da01a5126d0"} Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.569481 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3721d608d71f840cfefd584322332fd88b72dcb989cdcfcd288c6da01a5126d0" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.569581 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-087c-account-create-update-qnlsx" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.582891 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dftzh" event={"ID":"bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4","Type":"ContainerDied","Data":"3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a"} Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.582937 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb30dc7ce6a38d18c43fa6669a1fe45b71d1e88c10fa39a798d51dbbdc8535a" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.583024 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dftzh" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.636750 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="da89f0ff-c51c-4c4a-8df4-f7787d29ddd2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.750955 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:01:33 crc kubenswrapper[4873]: I0219 10:01:33.951293 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9472r"] Feb 19 10:01:33 crc kubenswrapper[4873]: W0219 10:01:33.964535 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce0a8b9_b7a8_4ee7_8d68_0e6145ada6ba.slice/crio-e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b WatchSource:0}: Error finding container e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b: Status 404 returned error can't find the container with id e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b Feb 19 10:01:33 crc 
kubenswrapper[4873]: I0219 10:01:33.978715 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Feb 19 10:01:34 crc kubenswrapper[4873]: I0219 10:01:34.326816 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 19 10:01:34 crc kubenswrapper[4873]: I0219 10:01:34.596981 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9472r" event={"ID":"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba","Type":"ContainerStarted","Data":"e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b"} Feb 19 10:01:34 crc kubenswrapper[4873]: I0219 10:01:34.600641 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"38196c7be30e9f61335fa1f0192f051e1628db316ccf2a1bfb000ba2cf14f1d1"} Feb 19 10:01:34 crc kubenswrapper[4873]: I0219 10:01:34.600675 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"f31977bf19b6ff3e5baded4225c4014777cdf792cf2977b06fd51396f8c73011"} Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.619669 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"c42f18c86fb159a11eb5d3fb83148d0afcb1fa64baa24bda8073ec6b339ef356"} Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.620000 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"00a173a5adc7f8b49c8265c3cd1d39f7fab154d134782f52f029ed762c47ec91"} Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.620011 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"7662dd8178d787f4ea3d1a4ae3858a1da893475dae8593dd22bb93a83b9bc95b"} Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.831310 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kvmj2"] Feb 19 10:01:35 crc kubenswrapper[4873]: E0219 10:01:35.831613 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0cc2ef-89a2-4220-8b44-7fc71537ab50" containerName="mariadb-account-create-update" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.831625 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0cc2ef-89a2-4220-8b44-7fc71537ab50" containerName="mariadb-account-create-update" Feb 19 10:01:35 crc kubenswrapper[4873]: E0219 10:01:35.831653 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" containerName="mariadb-database-create" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.831658 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" containerName="mariadb-database-create" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.831817 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" containerName="mariadb-database-create" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.831836 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0cc2ef-89a2-4220-8b44-7fc71537ab50" containerName="mariadb-account-create-update" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.832365 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.838769 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.839142 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbw2\" (UniqueName: \"kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.839949 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.841055 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kvmj2"] Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.865636 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.871206 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.953414 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbw2\" (UniqueName: \"kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") 
" pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.953664 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:35 crc kubenswrapper[4873]: I0219 10:01:35.954907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:36 crc kubenswrapper[4873]: I0219 10:01:36.002319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbw2\" (UniqueName: \"kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2\") pod \"root-account-create-update-kvmj2\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:36 crc kubenswrapper[4873]: I0219 10:01:36.150222 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:36 crc kubenswrapper[4873]: I0219 10:01:36.638495 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:36 crc kubenswrapper[4873]: I0219 10:01:36.899273 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kvmj2"] Feb 19 10:01:36 crc kubenswrapper[4873]: W0219 10:01:36.913297 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29a6680f_7e8e_4326_9401_fde957599477.slice/crio-f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6 WatchSource:0}: Error finding container f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6: Status 404 returned error can't find the container with id f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6 Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.647090 4873 generic.go:334] "Generic (PLEG): container finished" podID="29a6680f-7e8e-4326-9401-fde957599477" containerID="97c6c0035f5f6c9762dd68933f3909de6f99dfa1fe212cf2c55b0644dfffdb93" exitCode=0 Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.647306 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvmj2" event={"ID":"29a6680f-7e8e-4326-9401-fde957599477","Type":"ContainerDied","Data":"97c6c0035f5f6c9762dd68933f3909de6f99dfa1fe212cf2c55b0644dfffdb93"} Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.647549 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvmj2" event={"ID":"29a6680f-7e8e-4326-9401-fde957599477","Type":"ContainerStarted","Data":"f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6"} Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.662963 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"d0826b7651dda0e678af49c63026f36047f31ce626d2571624180f0bdc91d047"} Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.663011 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"6551ae035c3222119a9866389eed635be426d6bf76298476c494a0579571f69e"} Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.663041 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"105e37c7b1a18d3d62b5ba80f1c54dbbad88863bd679c19a92a9d87a6935d1d9"} Feb 19 10:01:37 crc kubenswrapper[4873]: I0219 10:01:37.663053 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"5b1d1cf50f8beb3ecbd3ab03983d6ba7a17f20f00ccee08ee1f87637cf0af12b"} Feb 19 10:01:38 crc kubenswrapper[4873]: I0219 10:01:38.963778 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:38 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:38 crc kubenswrapper[4873]: > Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.371372 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.517844 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbw2\" (UniqueName: \"kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2\") pod \"29a6680f-7e8e-4326-9401-fde957599477\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.518331 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts\") pod \"29a6680f-7e8e-4326-9401-fde957599477\" (UID: \"29a6680f-7e8e-4326-9401-fde957599477\") " Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.519576 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29a6680f-7e8e-4326-9401-fde957599477" (UID: "29a6680f-7e8e-4326-9401-fde957599477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.525113 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2" (OuterVolumeSpecName: "kube-api-access-4gbw2") pod "29a6680f-7e8e-4326-9401-fde957599477" (UID: "29a6680f-7e8e-4326-9401-fde957599477"). InnerVolumeSpecName "kube-api-access-4gbw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.620368 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29a6680f-7e8e-4326-9401-fde957599477-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.620401 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gbw2\" (UniqueName: \"kubernetes.io/projected/29a6680f-7e8e-4326-9401-fde957599477-kube-api-access-4gbw2\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.691487 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"43241f4e3031b74ccbe2321325723ab3e532db7807ed1e957e9fd210bc9f3828"} Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.691530 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"453e54863e0fc731cb1decd734e78f4081a88821601c764cfce9b82cbbf25eda"} Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.693788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kvmj2" event={"ID":"29a6680f-7e8e-4326-9401-fde957599477","Type":"ContainerDied","Data":"f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6"} Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.693817 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d693699de1f87f0e2baf8cbe839d21813317244d1f8a537b02c8e0a41220d6" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.693865 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kvmj2" Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.724749 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.725167 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="prometheus" containerID="cri-o://8c1b23b8ffccae7306c11a4c3d6415747d09b3586d46bdcc68e728a0936c32a0" gracePeriod=600 Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.725255 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="config-reloader" containerID="cri-o://25a26e338e379f85c4c91e347569e9af0c97d68170a522acb020aee8d309e23c" gracePeriod=600 Feb 19 10:01:39 crc kubenswrapper[4873]: I0219 10:01:39.725275 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="thanos-sidecar" containerID="cri-o://7feaa15c2585dfd6c2a4ea45a4afeb2729895aad024065a30a27359792d333ac" gracePeriod=600 Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.711772 4873 generic.go:334] "Generic (PLEG): container finished" podID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerID="7feaa15c2585dfd6c2a4ea45a4afeb2729895aad024065a30a27359792d333ac" exitCode=0 Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.712239 4873 generic.go:334] "Generic (PLEG): container finished" podID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerID="25a26e338e379f85c4c91e347569e9af0c97d68170a522acb020aee8d309e23c" exitCode=0 Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.711863 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerDied","Data":"7feaa15c2585dfd6c2a4ea45a4afeb2729895aad024065a30a27359792d333ac"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.712285 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerDied","Data":"25a26e338e379f85c4c91e347569e9af0c97d68170a522acb020aee8d309e23c"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.712298 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerDied","Data":"8c1b23b8ffccae7306c11a4c3d6415747d09b3586d46bdcc68e728a0936c32a0"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.712255 4873 generic.go:334] "Generic (PLEG): container finished" podID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerID="8c1b23b8ffccae7306c11a4c3d6415747d09b3586d46bdcc68e728a0936c32a0" exitCode=0 Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.718630 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"c294b5748f2caa3d5e07d1a9ff34e328e71a1bbd65030e32834bac700fb4e372"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.718659 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"3ecd75568a15b9478406c53fa7acd0f225ffe936f015c69c59c50c3d47a6b018"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.718670 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"fff6f6a1a6441e525838d9470dedf04959877dc822b5e0d1de8610b7ddb6d89e"} Feb 19 10:01:40 crc kubenswrapper[4873]: I0219 10:01:40.953424 
4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.066940 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067021 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067267 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067352 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067413 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067442 
4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067464 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067506 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz72n\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067533 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.068291 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2\") pod \"0b46b116-4858-4b6a-b3ad-9337272f9a91\" (UID: \"0b46b116-4858-4b6a-b3ad-9337272f9a91\") " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.067540 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.068675 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.069225 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.073298 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config" (OuterVolumeSpecName: "config") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.075615 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.075652 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.075665 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0b46b116-4858-4b6a-b3ad-9337272f9a91-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.075681 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.077358 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.077762 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out" (OuterVolumeSpecName: "config-out") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.078302 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.082310 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n" (OuterVolumeSpecName: "kube-api-access-hz72n") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "kube-api-access-hz72n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.111067 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.111450 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config" (OuterVolumeSpecName: "web-config") pod "0b46b116-4858-4b6a-b3ad-9337272f9a91" (UID: "0b46b116-4858-4b6a-b3ad-9337272f9a91"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176657 4873 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176688 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz72n\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-kube-api-access-hz72n\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176699 4873 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0b46b116-4858-4b6a-b3ad-9337272f9a91-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176727 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") on node \"crc\" " Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176739 4873 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0b46b116-4858-4b6a-b3ad-9337272f9a91-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.176748 4873 reconciler_common.go:293] "Volume 
detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0b46b116-4858-4b6a-b3ad-9337272f9a91-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.198549 4873 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.199329 4873 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d") on node "crc" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.277715 4873 reconciler_common.go:293] "Volume detached for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.739505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"8399a72c47aa3eb27fa21a10ff0df17158123295d4b9c5fef14b8a272df52586"} Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.740036 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c3b21a02-7162-42ca-84cf-e0fa36b04a22","Type":"ContainerStarted","Data":"aa95f52e5f68957729c900c17b88f03f170d472e846005e5e793cf71068d51c7"} Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.745471 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0b46b116-4858-4b6a-b3ad-9337272f9a91","Type":"ContainerDied","Data":"3d0b7f98084ff77ff34a64d3b9fb32fc7993ea571d51b6cb0b24962f0fd5c9ef"} Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.745505 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.745549 4873 scope.go:117] "RemoveContainer" containerID="7feaa15c2585dfd6c2a4ea45a4afeb2729895aad024065a30a27359792d333ac" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.781534 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.304896969 podStartE2EDuration="42.781513688s" podCreationTimestamp="2026-02-19 10:00:59 +0000 UTC" firstStartedPulling="2026-02-19 10:01:33.757203154 +0000 UTC m=+1003.046634792" lastFinishedPulling="2026-02-19 10:01:39.233819873 +0000 UTC m=+1008.523251511" observedRunningTime="2026-02-19 10:01:41.775851278 +0000 UTC m=+1011.065282926" watchObservedRunningTime="2026-02-19 10:01:41.781513688 +0000 UTC m=+1011.070945326" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.802175 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.810857 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.850512 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:41 crc kubenswrapper[4873]: E0219 10:01:41.853330 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="thanos-sidecar" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.853364 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="thanos-sidecar" Feb 19 10:01:41 crc kubenswrapper[4873]: E0219 10:01:41.853390 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a6680f-7e8e-4326-9401-fde957599477" containerName="mariadb-account-create-update" Feb 19 10:01:41 crc 
kubenswrapper[4873]: I0219 10:01:41.853401 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a6680f-7e8e-4326-9401-fde957599477" containerName="mariadb-account-create-update" Feb 19 10:01:41 crc kubenswrapper[4873]: E0219 10:01:41.853421 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="init-config-reloader" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.853429 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="init-config-reloader" Feb 19 10:01:41 crc kubenswrapper[4873]: E0219 10:01:41.853457 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="config-reloader" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.853465 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="config-reloader" Feb 19 10:01:41 crc kubenswrapper[4873]: E0219 10:01:41.853494 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="prometheus" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.853502 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="prometheus" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.855198 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="prometheus" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.855242 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="config-reloader" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.855263 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="thanos-sidecar" Feb 19 10:01:41 crc 
kubenswrapper[4873]: I0219 10:01:41.855279 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a6680f-7e8e-4326-9401-fde957599477" containerName="mariadb-account-create-update" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.861131 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.869368 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.871553 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.871695 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.871955 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.872181 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.872872 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.873736 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-stpz9" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.873924 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.890966 4873 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:01:41 crc kubenswrapper[4873]: I0219 10:01:41.893888 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.004414 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.004762 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.004958 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005126 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005288 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005456 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005607 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005771 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.005955 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.006134 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.006295 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.006494 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.006661 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snd6v\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.100877 4873 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"] Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.102600 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.107700 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108387 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108410 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108446 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108476 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108493 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108514 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108539 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108559 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snd6v\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108602 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108627 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108653 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108674 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.108696 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:42 
crc kubenswrapper[4873]: I0219 10:01:42.111328 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.112000 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.112666 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.118547 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.121405 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.123149 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.127594 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.128754 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.130209 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.130223 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.139265 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.141044 4873 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.141092 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/668a04d4437b4137f130ddea3fc0a68c22db655664b336b39ceb124bf62a44ab/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.141521 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"]
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.156775 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snd6v\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.192264 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.209880 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.209953 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.209980 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6b5\" (UniqueName: \"kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.210005 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.210186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.210250 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.312081 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.312147 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6b5\" (UniqueName: \"kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.312194 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.312262 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.312680 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.313416 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.313453 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.313515 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.313580 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.314148 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.314397 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.341208 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6b5\" (UniqueName: \"kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5\") pod \"dnsmasq-dns-5f77dfd79f-tg9w4\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.489893 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:01:42 crc kubenswrapper[4873]: I0219 10:01:42.527553 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4"
Feb 19 10:01:43 crc kubenswrapper[4873]: I0219 10:01:43.497153 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" path="/var/lib/kubelet/pods/0b46b116-4858-4b6a-b3ad-9337272f9a91/volumes"
Feb 19 10:01:43 crc kubenswrapper[4873]: I0219 10:01:43.638991 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0"
Feb 19 10:01:43 crc kubenswrapper[4873]: I0219 10:01:43.864655 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="0b46b116-4858-4b6a-b3ad-9337272f9a91" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.109:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:01:43 crc kubenswrapper[4873]: I0219 10:01:43.978399 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 19 10:01:44 crc kubenswrapper[4873]: I0219 10:01:44.327264 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.791804 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-f5jnw"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.793187 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.819307 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-f5jnw"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.894372 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.894489 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfk7\" (UniqueName: \"kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.895040 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-63c5-account-create-update-rqftk"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.896177 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.899004 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.907863 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-63c5-account-create-update-rqftk"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.962608 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-86n9s"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.967180 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.969576 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-27d74"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.970176 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.971041 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.971331 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.974674 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86n9s"]
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.995489 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfk7\" (UniqueName: \"kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.995577 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.995616 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sht5z\" (UniqueName: \"kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.995675 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:45 crc kubenswrapper[4873]: I0219 10:01:45.996405 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.011570 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nfk5h"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.012764 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.029612 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfk5h"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.037518 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfk7\" (UniqueName: \"kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7\") pod \"barbican-db-create-f5jnw\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097188 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qhl\" (UniqueName: \"kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097266 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hbs\" (UniqueName: \"kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097309 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097377 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097397 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.097421 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sht5z\" (UniqueName: \"kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.098236 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.101966 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-73d0-account-create-update-vwj8q"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.103179 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.109142 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.114715 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f5jnw"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.117827 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-73d0-account-create-update-vwj8q"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.139063 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sht5z\" (UniqueName: \"kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z\") pod \"barbican-63c5-account-create-update-rqftk\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199077 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199171 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdc5\" (UniqueName: \"kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199238 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199260 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qhl\" (UniqueName: \"kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199311 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hbs\" (UniqueName: \"kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199357 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.199393 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.200295 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.203748 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.216797 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-63c5-account-create-update-rqftk"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.218927 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.228521 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hbs\" (UniqueName: \"kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs\") pod \"neutron-db-create-nfk5h\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.243573 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qhl\" (UniqueName: \"kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl\") pod \"keystone-db-sync-86n9s\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") " pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.292442 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.300830 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdc5\" (UniqueName: \"kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.301001 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.301738 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.318334 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdc5\" (UniqueName: \"kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5\") pod \"neutron-73d0-account-create-update-vwj8q\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.327070 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfk5h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.400622 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w65h5"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.403139 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.416504 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w65h5"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.427478 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73d0-account-create-update-vwj8q"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.502372 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-45f0-account-create-update-b4rvj"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.503452 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f0-account-create-update-b4rvj"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.508013 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wv4\" (UniqueName: \"kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.508201 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.519126 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.524746 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-45f0-account-create-update-b4rvj"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.611224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wv4\" (UniqueName: \"kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.611316 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5pl\" (UniqueName: \"kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.611372 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.611407 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.612246 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.647009 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wv4\" (UniqueName: \"kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4\") pod \"cinder-db-create-w65h5\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " pod="openstack/cinder-db-create-w65h5"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.691954 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-k6j2h"]
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.692996 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-k6j2h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.695864 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-m755p"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.702405 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713076 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713169 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5pl\" (UniqueName: \"kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713244 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj"
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713282 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data\") pod \"watcher-db-sync-k6j2h\" (UID:
\"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713309 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.713343 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66wv\" (UniqueName: \"kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.714365 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.718458 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-k6j2h"] Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.739520 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w65h5" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.751248 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5pl\" (UniqueName: \"kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl\") pod \"cinder-45f0-account-create-update-b4rvj\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " pod="openstack/cinder-45f0-account-create-update-b4rvj" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.814788 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66wv\" (UniqueName: \"kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.814882 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.814941 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.814969 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" 
Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.820519 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.820565 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.828297 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.833514 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f0-account-create-update-b4rvj" Feb 19 10:01:46 crc kubenswrapper[4873]: I0219 10:01:46.853705 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66wv\" (UniqueName: \"kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv\") pod \"watcher-db-sync-k6j2h\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") " pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:47 crc kubenswrapper[4873]: I0219 10:01:47.011677 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-k6j2h" Feb 19 10:01:48 crc kubenswrapper[4873]: I0219 10:01:48.969354 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" probeResult="failure" output=< Feb 19 10:01:48 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:01:48 crc kubenswrapper[4873]: > Feb 19 10:01:51 crc kubenswrapper[4873]: I0219 10:01:51.715185 4873 scope.go:117] "RemoveContainer" containerID="25a26e338e379f85c4c91e347569e9af0c97d68170a522acb020aee8d309e23c" Feb 19 10:01:51 crc kubenswrapper[4873]: E0219 10:01:51.736195 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Feb 19 10:01:51 crc kubenswrapper[4873]: E0219 10:01:51.736265 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Feb 19 10:01:51 crc kubenswrapper[4873]: E0219 10:01:51.736441 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.20:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btfz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9472r_openstack(7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 19 10:01:51 crc kubenswrapper[4873]: E0219 10:01:51.737634 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9472r" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" Feb 19 10:01:51 crc kubenswrapper[4873]: E0219 10:01:51.873866 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9472r" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" Feb 19 10:01:51 crc kubenswrapper[4873]: I0219 10:01:51.874254 4873 scope.go:117] "RemoveContainer" containerID="8c1b23b8ffccae7306c11a4c3d6415747d09b3586d46bdcc68e728a0936c32a0" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.097149 4873 scope.go:117] "RemoveContainer" containerID="7b07d5d4936c52da51ddc101b9f0cc93c881b6b0ec5f359e20ad961f5451f0a9" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.428487 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-73d0-account-create-update-vwj8q"] Feb 19 10:01:52 crc kubenswrapper[4873]: W0219 10:01:52.435724 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7769ae_caf0_4f62_be96_90d6fa334259.slice/crio-37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a WatchSource:0}: Error finding container 37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a: Status 404 returned error can't find the container with id 37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.438571 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-f5jnw"] Feb 19 10:01:52 crc kubenswrapper[4873]: W0219 10:01:52.440479 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec120760_bb10_44ff_bbb0_ed1665b4e17b.slice/crio-dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782 WatchSource:0}: Error finding container dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782: Status 404 returned error can't find the container with id dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782 Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.441223 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.448357 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.453490 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-63c5-account-create-update-rqftk"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.600081 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-86n9s"] Feb 19 10:01:52 crc kubenswrapper[4873]: W0219 10:01:52.603931 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94c35b26_7dc1_4cea_bbe7_53a9e47df7ba.slice/crio-6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc WatchSource:0}: Error finding container 6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc: Status 404 returned error can't find the container with id 6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.652855 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w65h5"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 
10:01:52.667159 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.691403 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfk5h"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.711699 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.721024 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-k6j2h"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.730413 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-45f0-account-create-update-b4rvj"] Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.806878 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"] Feb 19 10:01:52 crc kubenswrapper[4873]: W0219 10:01:52.825700 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8c0292_715e_4d4d_a552_5229adfc3e74.slice/crio-04cdcce41cf06a6a4c22d13c5a42c60370cf2135128656d07548457adad958ae WatchSource:0}: Error finding container 04cdcce41cf06a6a4c22d13c5a42c60370cf2135128656d07548457adad958ae: Status 404 returned error can't find the container with id 04cdcce41cf06a6a4c22d13c5a42c60370cf2135128656d07548457adad958ae Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.863444 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfk5h" event={"ID":"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba","Type":"ContainerStarted","Data":"6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.865697 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerStarted","Data":"67a74c25d6b44ccd6cb397b300a6cd2025bf7fa88890d389ef81197cfb4ef22d"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.867515 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w65h5" event={"ID":"e1f97f25-d006-40d7-a090-ab45ab11b282","Type":"ContainerStarted","Data":"c2bf0fbba2f57f774b52845ad4c11caabc24a49278d94ff2cd48b137e6aeb541"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.867551 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w65h5" event={"ID":"e1f97f25-d006-40d7-a090-ab45ab11b282","Type":"ContainerStarted","Data":"6ff99127cb67f2960d3ae8f41a09b021685c8e4644ccb65876e46a1f18c275f3"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.872272 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86n9s" event={"ID":"735c003d-082d-431f-9906-20c8946f1bf4","Type":"ContainerStarted","Data":"e7a7027c2775e72cf8a30257c067cda81d18ed0d292fcd29a26190fc119c919e"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.877523 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k6j2h" event={"ID":"a075072a-1153-4963-91c7-e9e2aa08f988","Type":"ContainerStarted","Data":"5743e2884f1a9a199fc04d17b81aca13969dd3eeccc42a1ad3851609621f80db"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.879413 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f0-account-create-update-b4rvj" event={"ID":"bf0daf0d-c150-49de-98af-3f65dd78112f","Type":"ContainerStarted","Data":"eb552ca98ac78dec98734f67b2cfc6dd764f57bca299edb1d904e9f18a03a5a9"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.887501 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-63c5-account-create-update-rqftk" 
event={"ID":"679f69ef-9960-4e33-a6aa-09baefabc417","Type":"ContainerStarted","Data":"c237b902a14f464df461cba85ac2f3875c00ea9082d53eccd8620f9ff36dbdf1"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.887538 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-63c5-account-create-update-rqftk" event={"ID":"679f69ef-9960-4e33-a6aa-09baefabc417","Type":"ContainerStarted","Data":"8ba33e912f464ea0722df73f3baa819208f1910932806de73fe2ef0c44ba7498"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.892931 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" event={"ID":"8e8c0292-715e-4d4d-a552-5229adfc3e74","Type":"ContainerStarted","Data":"04cdcce41cf06a6a4c22d13c5a42c60370cf2135128656d07548457adad958ae"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.894677 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-w65h5" podStartSLOduration=6.894663674 podStartE2EDuration="6.894663674s" podCreationTimestamp="2026-02-19 10:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:52.885838785 +0000 UTC m=+1022.175270423" watchObservedRunningTime="2026-02-19 10:01:52.894663674 +0000 UTC m=+1022.184095312" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.895484 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73d0-account-create-update-vwj8q" event={"ID":"fd7769ae-caf0-4f62-be96-90d6fa334259","Type":"ContainerStarted","Data":"fc653719445b1a5b46a480337ffb17668e9e8b070f487de6554f5c2e305620c3"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.895510 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73d0-account-create-update-vwj8q" 
event={"ID":"fd7769ae-caf0-4f62-be96-90d6fa334259","Type":"ContainerStarted","Data":"37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.897839 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f5jnw" event={"ID":"ec120760-bb10-44ff-bbb0-ed1665b4e17b","Type":"ContainerStarted","Data":"270c7b1e210c60f9930081568a2a368a094d153be46a4131b2c800c6cabb0758"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.897869 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f5jnw" event={"ID":"ec120760-bb10-44ff-bbb0-ed1665b4e17b","Type":"ContainerStarted","Data":"dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782"} Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.905880 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-63c5-account-create-update-rqftk" podStartSLOduration=7.905857902 podStartE2EDuration="7.905857902s" podCreationTimestamp="2026-02-19 10:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:52.899880774 +0000 UTC m=+1022.189312412" watchObservedRunningTime="2026-02-19 10:01:52.905857902 +0000 UTC m=+1022.195289540" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.930435 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-f5jnw" podStartSLOduration=7.930392991 podStartE2EDuration="7.930392991s" podCreationTimestamp="2026-02-19 10:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:52.927184851 +0000 UTC m=+1022.216616489" watchObservedRunningTime="2026-02-19 10:01:52.930392991 +0000 UTC m=+1022.219824629" Feb 19 10:01:52 crc kubenswrapper[4873]: I0219 10:01:52.946127 4873 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-73d0-account-create-update-vwj8q" podStartSLOduration=6.94607835 podStartE2EDuration="6.94607835s" podCreationTimestamp="2026-02-19 10:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:52.940348678 +0000 UTC m=+1022.229780316" watchObservedRunningTime="2026-02-19 10:01:52.94607835 +0000 UTC m=+1022.235509988" Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.907095 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerID="c38f23c9308a52dc889562a59a6b3d3134f3aebd40d9ab2a2804a839bf127153" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.907328 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" event={"ID":"8e8c0292-715e-4d4d-a552-5229adfc3e74","Type":"ContainerDied","Data":"c38f23c9308a52dc889562a59a6b3d3134f3aebd40d9ab2a2804a839bf127153"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.912042 4873 generic.go:334] "Generic (PLEG): container finished" podID="bf0daf0d-c150-49de-98af-3f65dd78112f" containerID="88965eb31897e7c9f4b9aa04da422e3396b97ead67a5f74aaa92bd82cf049dc5" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.912287 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f0-account-create-update-b4rvj" event={"ID":"bf0daf0d-c150-49de-98af-3f65dd78112f","Type":"ContainerDied","Data":"88965eb31897e7c9f4b9aa04da422e3396b97ead67a5f74aaa92bd82cf049dc5"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.928745 4873 generic.go:334] "Generic (PLEG): container finished" podID="fd7769ae-caf0-4f62-be96-90d6fa334259" containerID="fc653719445b1a5b46a480337ffb17668e9e8b070f487de6554f5c2e305620c3" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.928820 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-73d0-account-create-update-vwj8q" event={"ID":"fd7769ae-caf0-4f62-be96-90d6fa334259","Type":"ContainerDied","Data":"fc653719445b1a5b46a480337ffb17668e9e8b070f487de6554f5c2e305620c3"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.942923 4873 generic.go:334] "Generic (PLEG): container finished" podID="ec120760-bb10-44ff-bbb0-ed1665b4e17b" containerID="270c7b1e210c60f9930081568a2a368a094d153be46a4131b2c800c6cabb0758" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.942986 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f5jnw" event={"ID":"ec120760-bb10-44ff-bbb0-ed1665b4e17b","Type":"ContainerDied","Data":"270c7b1e210c60f9930081568a2a368a094d153be46a4131b2c800c6cabb0758"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.959269 4873 generic.go:334] "Generic (PLEG): container finished" podID="e1f97f25-d006-40d7-a090-ab45ab11b282" containerID="c2bf0fbba2f57f774b52845ad4c11caabc24a49278d94ff2cd48b137e6aeb541" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.959346 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w65h5" event={"ID":"e1f97f25-d006-40d7-a090-ab45ab11b282","Type":"ContainerDied","Data":"c2bf0fbba2f57f774b52845ad4c11caabc24a49278d94ff2cd48b137e6aeb541"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.967868 4873 generic.go:334] "Generic (PLEG): container finished" podID="679f69ef-9960-4e33-a6aa-09baefabc417" containerID="c237b902a14f464df461cba85ac2f3875c00ea9082d53eccd8620f9ff36dbdf1" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.967921 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-63c5-account-create-update-rqftk" event={"ID":"679f69ef-9960-4e33-a6aa-09baefabc417","Type":"ContainerDied","Data":"c237b902a14f464df461cba85ac2f3875c00ea9082d53eccd8620f9ff36dbdf1"} Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 
10:01:53.974128 4873 generic.go:334] "Generic (PLEG): container finished" podID="94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" containerID="7d9cb5cecd99aa90e0a6558ac0a3e7fa7ae0c94550c983a65f7942335964abac" exitCode=0 Feb 19 10:01:53 crc kubenswrapper[4873]: I0219 10:01:53.974155 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfk5h" event={"ID":"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba","Type":"ContainerDied","Data":"7d9cb5cecd99aa90e0a6558ac0a3e7fa7ae0c94550c983a65f7942335964abac"} Feb 19 10:01:54 crc kubenswrapper[4873]: I0219 10:01:54.987418 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" event={"ID":"8e8c0292-715e-4d4d-a552-5229adfc3e74","Type":"ContainerStarted","Data":"3202df88b506237a1560baea9fc86854fa472069f50ad6c6f94a7855eaa6ff1a"} Feb 19 10:01:54 crc kubenswrapper[4873]: I0219 10:01:54.988190 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:01:55 crc kubenswrapper[4873]: I0219 10:01:55.011720 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podStartSLOduration=13.011701127 podStartE2EDuration="13.011701127s" podCreationTimestamp="2026-02-19 10:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:55.009387479 +0000 UTC m=+1024.298819127" watchObservedRunningTime="2026-02-19 10:01:55.011701127 +0000 UTC m=+1024.301132775" Feb 19 10:01:55 crc kubenswrapper[4873]: I0219 10:01:55.995473 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerStarted","Data":"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52"} Feb 19 10:01:57 crc kubenswrapper[4873]: I0219 10:01:57.979868 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:01:58 crc kubenswrapper[4873]: I0219 10:01:58.060438 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:01:58 crc kubenswrapper[4873]: I0219 10:01:58.225438 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] Feb 19 10:01:59 crc kubenswrapper[4873]: I0219 10:01:59.043620 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmkgp" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server" containerID="cri-o://a571f36e40c7ad937de6c80c4fe2960d3a56ca51d705e8212a16e2333ee169bb" gracePeriod=2 Feb 19 10:02:00 crc kubenswrapper[4873]: I0219 10:02:00.052877 4873 generic.go:334] "Generic (PLEG): container finished" podID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerID="a571f36e40c7ad937de6c80c4fe2960d3a56ca51d705e8212a16e2333ee169bb" exitCode=0 Feb 19 10:02:00 crc kubenswrapper[4873]: I0219 10:02:00.052920 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerDied","Data":"a571f36e40c7ad937de6c80c4fe2960d3a56ca51d705e8212a16e2333ee169bb"} Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.464955 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f5jnw" Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.479175 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45f0-account-create-update-b4rvj" Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.495319 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfk5h" Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.498081 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-63c5-account-create-update-rqftk" Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.500399 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73d0-account-create-update-vwj8q" Feb 19 10:02:01 crc kubenswrapper[4873]: I0219 10:02:01.520738 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w65h5" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620437 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts\") pod \"e1f97f25-d006-40d7-a090-ab45ab11b282\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620518 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts\") pod \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620558 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts\") pod \"fd7769ae-caf0-4f62-be96-90d6fa334259\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620604 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdc5\" (UniqueName: 
\"kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5\") pod \"fd7769ae-caf0-4f62-be96-90d6fa334259\" (UID: \"fd7769ae-caf0-4f62-be96-90d6fa334259\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620631 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts\") pod \"bf0daf0d-c150-49de-98af-3f65dd78112f\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620664 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts\") pod \"679f69ef-9960-4e33-a6aa-09baefabc417\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620735 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sht5z\" (UniqueName: \"kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z\") pod \"679f69ef-9960-4e33-a6aa-09baefabc417\" (UID: \"679f69ef-9960-4e33-a6aa-09baefabc417\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620786 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thfk7\" (UniqueName: \"kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7\") pod \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\" (UID: \"ec120760-bb10-44ff-bbb0-ed1665b4e17b\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620901 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts\") pod \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " Feb 19 
10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.620955 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4hbs\" (UniqueName: \"kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs\") pod \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\" (UID: \"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.621011 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6wv4\" (UniqueName: \"kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4\") pod \"e1f97f25-d006-40d7-a090-ab45ab11b282\" (UID: \"e1f97f25-d006-40d7-a090-ab45ab11b282\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.621052 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj5pl\" (UniqueName: \"kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl\") pod \"bf0daf0d-c150-49de-98af-3f65dd78112f\" (UID: \"bf0daf0d-c150-49de-98af-3f65dd78112f\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.621937 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f97f25-d006-40d7-a090-ab45ab11b282" (UID: "e1f97f25-d006-40d7-a090-ab45ab11b282"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.622448 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "679f69ef-9960-4e33-a6aa-09baefabc417" (UID: "679f69ef-9960-4e33-a6aa-09baefabc417"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.622866 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec120760-bb10-44ff-bbb0-ed1665b4e17b" (UID: "ec120760-bb10-44ff-bbb0-ed1665b4e17b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.623333 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd7769ae-caf0-4f62-be96-90d6fa334259" (UID: "fd7769ae-caf0-4f62-be96-90d6fa334259"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.624287 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f97f25-d006-40d7-a090-ab45ab11b282-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.624319 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec120760-bb10-44ff-bbb0-ed1665b4e17b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.624333 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7769ae-caf0-4f62-be96-90d6fa334259-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.624345 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/679f69ef-9960-4e33-a6aa-09baefabc417-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.629972 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z" (OuterVolumeSpecName: "kube-api-access-sht5z") pod "679f69ef-9960-4e33-a6aa-09baefabc417" (UID: "679f69ef-9960-4e33-a6aa-09baefabc417"). InnerVolumeSpecName "kube-api-access-sht5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.631566 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf0daf0d-c150-49de-98af-3f65dd78112f" (UID: "bf0daf0d-c150-49de-98af-3f65dd78112f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.631723 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5" (OuterVolumeSpecName: "kube-api-access-5jdc5") pod "fd7769ae-caf0-4f62-be96-90d6fa334259" (UID: "fd7769ae-caf0-4f62-be96-90d6fa334259"). InnerVolumeSpecName "kube-api-access-5jdc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.632626 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" (UID: "94c35b26-7dc1-4cea-bbe7-53a9e47df7ba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.633956 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7" (OuterVolumeSpecName: "kube-api-access-thfk7") pod "ec120760-bb10-44ff-bbb0-ed1665b4e17b" (UID: "ec120760-bb10-44ff-bbb0-ed1665b4e17b"). InnerVolumeSpecName "kube-api-access-thfk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.636224 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4" (OuterVolumeSpecName: "kube-api-access-m6wv4") pod "e1f97f25-d006-40d7-a090-ab45ab11b282" (UID: "e1f97f25-d006-40d7-a090-ab45ab11b282"). InnerVolumeSpecName "kube-api-access-m6wv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.637760 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs" (OuterVolumeSpecName: "kube-api-access-j4hbs") pod "94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" (UID: "94c35b26-7dc1-4cea-bbe7-53a9e47df7ba"). InnerVolumeSpecName "kube-api-access-j4hbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.640906 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl" (OuterVolumeSpecName: "kube-api-access-tj5pl") pod "bf0daf0d-c150-49de-98af-3f65dd78112f" (UID: "bf0daf0d-c150-49de-98af-3f65dd78112f"). InnerVolumeSpecName "kube-api-access-tj5pl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726229 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj5pl\" (UniqueName: \"kubernetes.io/projected/bf0daf0d-c150-49de-98af-3f65dd78112f-kube-api-access-tj5pl\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726255 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdc5\" (UniqueName: \"kubernetes.io/projected/fd7769ae-caf0-4f62-be96-90d6fa334259-kube-api-access-5jdc5\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726266 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf0daf0d-c150-49de-98af-3f65dd78112f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726274 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sht5z\" (UniqueName: \"kubernetes.io/projected/679f69ef-9960-4e33-a6aa-09baefabc417-kube-api-access-sht5z\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726283 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thfk7\" (UniqueName: \"kubernetes.io/projected/ec120760-bb10-44ff-bbb0-ed1665b4e17b-kube-api-access-thfk7\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726291 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726300 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4hbs\" (UniqueName: \"kubernetes.io/projected/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba-kube-api-access-j4hbs\") on node \"crc\" 
DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:01.726309 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6wv4\" (UniqueName: \"kubernetes.io/projected/e1f97f25-d006-40d7-a090-ab45ab11b282-kube-api-access-m6wv4\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.075772 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerID="5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52" exitCode=0 Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.075835 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerDied","Data":"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.078213 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfk5h" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.078202 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfk5h" event={"ID":"94c35b26-7dc1-4cea-bbe7-53a9e47df7ba","Type":"ContainerDied","Data":"6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.078339 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbd3a2c469215faf7d0f0cdd4481319d382cf3f96f79d5bbcd5880004e38acc" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.091152 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-45f0-account-create-update-b4rvj" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.091183 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45f0-account-create-update-b4rvj" event={"ID":"bf0daf0d-c150-49de-98af-3f65dd78112f","Type":"ContainerDied","Data":"eb552ca98ac78dec98734f67b2cfc6dd764f57bca299edb1d904e9f18a03a5a9"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.091253 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb552ca98ac78dec98734f67b2cfc6dd764f57bca299edb1d904e9f18a03a5a9" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.092784 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73d0-account-create-update-vwj8q" event={"ID":"fd7769ae-caf0-4f62-be96-90d6fa334259","Type":"ContainerDied","Data":"37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.092829 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e3d42b2018f2672ea117a6adcb9cd16be3ac6b642d4d999186c536331b880a" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.092841 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-73d0-account-create-update-vwj8q" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.094157 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-f5jnw" event={"ID":"ec120760-bb10-44ff-bbb0-ed1665b4e17b","Type":"ContainerDied","Data":"dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.094177 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dca669af7f7e92df448e624a6bcc1f9ab5d919831ea4f5a0d25f76e099c94782" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.094226 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-f5jnw" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.095508 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w65h5" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.095524 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w65h5" event={"ID":"e1f97f25-d006-40d7-a090-ab45ab11b282","Type":"ContainerDied","Data":"6ff99127cb67f2960d3ae8f41a09b021685c8e4644ccb65876e46a1f18c275f3"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.095602 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff99127cb67f2960d3ae8f41a09b021685c8e4644ccb65876e46a1f18c275f3" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.097244 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-63c5-account-create-update-rqftk" event={"ID":"679f69ef-9960-4e33-a6aa-09baefabc417","Type":"ContainerDied","Data":"8ba33e912f464ea0722df73f3baa819208f1910932806de73fe2ef0c44ba7498"} Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.097265 4873 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8ba33e912f464ea0722df73f3baa819208f1910932806de73fe2ef0c44ba7498" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.097308 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-63c5-account-create-update-rqftk" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.277494 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.438429 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content\") pod \"d115a791-c703-4c6e-91e5-8f3ab9608277\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.438979 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities\") pod \"d115a791-c703-4c6e-91e5-8f3ab9608277\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.439147 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4z6\" (UniqueName: \"kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6\") pod \"d115a791-c703-4c6e-91e5-8f3ab9608277\" (UID: \"d115a791-c703-4c6e-91e5-8f3ab9608277\") " Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.440641 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities" (OuterVolumeSpecName: "utilities") pod "d115a791-c703-4c6e-91e5-8f3ab9608277" (UID: "d115a791-c703-4c6e-91e5-8f3ab9608277"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.442685 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6" (OuterVolumeSpecName: "kube-api-access-pl4z6") pod "d115a791-c703-4c6e-91e5-8f3ab9608277" (UID: "d115a791-c703-4c6e-91e5-8f3ab9608277"). InnerVolumeSpecName "kube-api-access-pl4z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.530294 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.540781 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.540806 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl4z6\" (UniqueName: \"kubernetes.io/projected/d115a791-c703-4c6e-91e5-8f3ab9608277-kube-api-access-pl4z6\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.547126 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d115a791-c703-4c6e-91e5-8f3ab9608277" (UID: "d115a791-c703-4c6e-91e5-8f3ab9608277"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.645782 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d115a791-c703-4c6e-91e5-8f3ab9608277-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.646942 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"] Feb 19 10:02:02 crc kubenswrapper[4873]: I0219 10:02:02.647146 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="dnsmasq-dns" containerID="cri-o://9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97" gracePeriod=10 Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.064404 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.118263 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmkgp" event={"ID":"d115a791-c703-4c6e-91e5-8f3ab9608277","Type":"ContainerDied","Data":"164021a9f61ae1ea4080d3f61899481761a6227bd5066ab4023318b78119f680"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.118309 4873 scope.go:117] "RemoveContainer" containerID="a571f36e40c7ad937de6c80c4fe2960d3a56ca51d705e8212a16e2333ee169bb" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.118427 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmkgp" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.137820 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.137836 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" event={"ID":"710b77db-c69e-4428-93f6-7ce8b2c7ee17","Type":"ContainerDied","Data":"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.137770 4873 generic.go:334] "Generic (PLEG): container finished" podID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerID="9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97" exitCode=0 Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.138006 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f5d85f6c-9z7rp" event={"ID":"710b77db-c69e-4428-93f6-7ce8b2c7ee17","Type":"ContainerDied","Data":"d5213d8f776a516eb0ebc1bff77eadf707410bc2d3c6d133cd538660a60a385d"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.148735 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86n9s" event={"ID":"735c003d-082d-431f-9906-20c8946f1bf4","Type":"ContainerStarted","Data":"d5e694f492487d42cde9591670ae968a23c3523d631e5fc12b8c943bcbf3ca29"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.149794 4873 scope.go:117] "RemoveContainer" containerID="ddf2f2a59be91b05f4a24a87c32978d8341502e53c5702e5bb9180a41d34ff6f" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.154238 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrcpc\" (UniqueName: \"kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc\") pod \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.154296 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb\") pod \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.154318 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc\") pod \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.154341 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config\") pod \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.154418 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb\") pod \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\" (UID: \"710b77db-c69e-4428-93f6-7ce8b2c7ee17\") " Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.156661 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerStarted","Data":"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.158322 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k6j2h" event={"ID":"a075072a-1153-4963-91c7-e9e2aa08f988","Type":"ContainerStarted","Data":"f7fcab32e5de37523d8bdcbeaad1ae0eeef4c93525ac32d44d6c45730e393e7a"} Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.164983 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] 
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.177498 4873 scope.go:117] "RemoveContainer" containerID="c3c32c24ff9ddc9c878bf60c4e06dc7e24a6feab7886836d8ecf2510f7a2f602" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.179209 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmkgp"] Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.182331 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc" (OuterVolumeSpecName: "kube-api-access-qrcpc") pod "710b77db-c69e-4428-93f6-7ce8b2c7ee17" (UID: "710b77db-c69e-4428-93f6-7ce8b2c7ee17"). InnerVolumeSpecName "kube-api-access-qrcpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.184530 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-86n9s" podStartSLOduration=8.581058443 podStartE2EDuration="18.184498705s" podCreationTimestamp="2026-02-19 10:01:45 +0000 UTC" firstStartedPulling="2026-02-19 10:01:52.633198018 +0000 UTC m=+1021.922629666" lastFinishedPulling="2026-02-19 10:02:02.23663829 +0000 UTC m=+1031.526069928" observedRunningTime="2026-02-19 10:02:03.169523104 +0000 UTC m=+1032.458954742" watchObservedRunningTime="2026-02-19 10:02:03.184498705 +0000 UTC m=+1032.473930343" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.191885 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-k6j2h" podStartSLOduration=7.536493367 podStartE2EDuration="17.191868698s" podCreationTimestamp="2026-02-19 10:01:46 +0000 UTC" firstStartedPulling="2026-02-19 10:01:52.643208136 +0000 UTC m=+1021.932639764" lastFinishedPulling="2026-02-19 10:02:02.298583457 +0000 UTC m=+1031.588015095" observedRunningTime="2026-02-19 10:02:03.187945961 +0000 UTC m=+1032.477377599" 
watchObservedRunningTime="2026-02-19 10:02:03.191868698 +0000 UTC m=+1032.481300336" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.200842 4873 scope.go:117] "RemoveContainer" containerID="9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.211955 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "710b77db-c69e-4428-93f6-7ce8b2c7ee17" (UID: "710b77db-c69e-4428-93f6-7ce8b2c7ee17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.216341 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "710b77db-c69e-4428-93f6-7ce8b2c7ee17" (UID: "710b77db-c69e-4428-93f6-7ce8b2c7ee17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.223331 4873 scope.go:117] "RemoveContainer" containerID="84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4" Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.226064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "710b77db-c69e-4428-93f6-7ce8b2c7ee17" (UID: "710b77db-c69e-4428-93f6-7ce8b2c7ee17"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.237715 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config" (OuterVolumeSpecName: "config") pod "710b77db-c69e-4428-93f6-7ce8b2c7ee17" (UID: "710b77db-c69e-4428-93f6-7ce8b2c7ee17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.245872 4873 scope.go:117] "RemoveContainer" containerID="9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97"
Feb 19 10:02:03 crc kubenswrapper[4873]: E0219 10:02:03.246495 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97\": container with ID starting with 9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97 not found: ID does not exist" containerID="9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97"
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.246584 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97"} err="failed to get container status \"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97\": rpc error: code = NotFound desc = could not find container \"9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97\": container with ID starting with 9b82833ea4c40ca9217967e3dd7ede76510477fd6b291d755ae79e49cbe06c97 not found: ID does not exist"
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.246618 4873 scope.go:117] "RemoveContainer" containerID="84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4"
Feb 19 10:02:03 crc kubenswrapper[4873]: E0219 10:02:03.246926 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4\": container with ID starting with 84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4 not found: ID does not exist" containerID="84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4"
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.246962 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4"} err="failed to get container status \"84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4\": rpc error: code = NotFound desc = could not find container \"84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4\": container with ID starting with 84d6805fa8e8d8d98a61caa72fabcd292a85d006dfab20b6192c2c1765a01aa4 not found: ID does not exist"
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.256528 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrcpc\" (UniqueName: \"kubernetes.io/projected/710b77db-c69e-4428-93f6-7ce8b2c7ee17-kube-api-access-qrcpc\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.256558 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.256569 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.256578 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.256588 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/710b77db-c69e-4428-93f6-7ce8b2c7ee17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.472329 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"]
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.482203 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f5d85f6c-9z7rp"]
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.514213 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" path="/var/lib/kubelet/pods/710b77db-c69e-4428-93f6-7ce8b2c7ee17/volumes"
Feb 19 10:02:03 crc kubenswrapper[4873]: I0219 10:02:03.514789 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" path="/var/lib/kubelet/pods/d115a791-c703-4c6e-91e5-8f3ab9608277/volumes"
Feb 19 10:02:06 crc kubenswrapper[4873]: I0219 10:02:06.188178 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerStarted","Data":"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573"}
Feb 19 10:02:06 crc kubenswrapper[4873]: I0219 10:02:06.188505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerStarted","Data":"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980"}
Feb 19 10:02:06 crc kubenswrapper[4873]: I0219 10:02:06.216711 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.216693141 podStartE2EDuration="25.216693141s" podCreationTimestamp="2026-02-19 10:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:06.214327432 +0000 UTC m=+1035.503759080" watchObservedRunningTime="2026-02-19 10:02:06.216693141 +0000 UTC m=+1035.506124789"
Feb 19 10:02:07 crc kubenswrapper[4873]: I0219 10:02:07.199075 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9472r" event={"ID":"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba","Type":"ContainerStarted","Data":"e5eec3e87329724888651bc35b53713711df95ef48801142ac4dd2488284d91d"}
Feb 19 10:02:07 crc kubenswrapper[4873]: I0219 10:02:07.201990 4873 generic.go:334] "Generic (PLEG): container finished" podID="a075072a-1153-4963-91c7-e9e2aa08f988" containerID="f7fcab32e5de37523d8bdcbeaad1ae0eeef4c93525ac32d44d6c45730e393e7a" exitCode=0
Feb 19 10:02:07 crc kubenswrapper[4873]: I0219 10:02:07.202097 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k6j2h" event={"ID":"a075072a-1153-4963-91c7-e9e2aa08f988","Type":"ContainerDied","Data":"f7fcab32e5de37523d8bdcbeaad1ae0eeef4c93525ac32d44d6c45730e393e7a"}
Feb 19 10:02:07 crc kubenswrapper[4873]: I0219 10:02:07.226045 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9472r" podStartSLOduration=3.616256544 podStartE2EDuration="35.226024212s" podCreationTimestamp="2026-02-19 10:01:32 +0000 UTC" firstStartedPulling="2026-02-19 10:01:33.970057364 +0000 UTC m=+1003.259489002" lastFinishedPulling="2026-02-19 10:02:05.579825032 +0000 UTC m=+1034.869256670" observedRunningTime="2026-02-19 10:02:07.21668883 +0000 UTC m=+1036.506120478" watchObservedRunningTime="2026-02-19 10:02:07.226024212 +0000 UTC m=+1036.515455860"
Feb 19 10:02:07 crc kubenswrapper[4873]: I0219 10:02:07.494799 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.211760 4873 generic.go:334] "Generic (PLEG): container finished" podID="735c003d-082d-431f-9906-20c8946f1bf4" containerID="d5e694f492487d42cde9591670ae968a23c3523d631e5fc12b8c943bcbf3ca29" exitCode=0
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.212311 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86n9s" event={"ID":"735c003d-082d-431f-9906-20c8946f1bf4","Type":"ContainerDied","Data":"d5e694f492487d42cde9591670ae968a23c3523d631e5fc12b8c943bcbf3ca29"}
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.478918 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-k6j2h"
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.550771 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle\") pod \"a075072a-1153-4963-91c7-e9e2aa08f988\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") "
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.550866 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66wv\" (UniqueName: \"kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv\") pod \"a075072a-1153-4963-91c7-e9e2aa08f988\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") "
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.550902 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data\") pod \"a075072a-1153-4963-91c7-e9e2aa08f988\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") "
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.550981 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data\") pod \"a075072a-1153-4963-91c7-e9e2aa08f988\" (UID: \"a075072a-1153-4963-91c7-e9e2aa08f988\") "
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.563767 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a075072a-1153-4963-91c7-e9e2aa08f988" (UID: "a075072a-1153-4963-91c7-e9e2aa08f988"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.563767 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv" (OuterVolumeSpecName: "kube-api-access-t66wv") pod "a075072a-1153-4963-91c7-e9e2aa08f988" (UID: "a075072a-1153-4963-91c7-e9e2aa08f988"). InnerVolumeSpecName "kube-api-access-t66wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.589028 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a075072a-1153-4963-91c7-e9e2aa08f988" (UID: "a075072a-1153-4963-91c7-e9e2aa08f988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.597816 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data" (OuterVolumeSpecName: "config-data") pod "a075072a-1153-4963-91c7-e9e2aa08f988" (UID: "a075072a-1153-4963-91c7-e9e2aa08f988"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.654661 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t66wv\" (UniqueName: \"kubernetes.io/projected/a075072a-1153-4963-91c7-e9e2aa08f988-kube-api-access-t66wv\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.654695 4873 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.654705 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:08 crc kubenswrapper[4873]: I0219 10:02:08.654713 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a075072a-1153-4963-91c7-e9e2aa08f988-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.223409 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k6j2h" event={"ID":"a075072a-1153-4963-91c7-e9e2aa08f988","Type":"ContainerDied","Data":"5743e2884f1a9a199fc04d17b81aca13969dd3eeccc42a1ad3851609621f80db"}
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.223455 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5743e2884f1a9a199fc04d17b81aca13969dd3eeccc42a1ad3851609621f80db"
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.224257 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-k6j2h"
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.547161 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.671804 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle\") pod \"735c003d-082d-431f-9906-20c8946f1bf4\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") "
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.671875 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4qhl\" (UniqueName: \"kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl\") pod \"735c003d-082d-431f-9906-20c8946f1bf4\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") "
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.671917 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data\") pod \"735c003d-082d-431f-9906-20c8946f1bf4\" (UID: \"735c003d-082d-431f-9906-20c8946f1bf4\") "
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.693373 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl" (OuterVolumeSpecName: "kube-api-access-p4qhl") pod "735c003d-082d-431f-9906-20c8946f1bf4" (UID: "735c003d-082d-431f-9906-20c8946f1bf4"). InnerVolumeSpecName "kube-api-access-p4qhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.698367 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735c003d-082d-431f-9906-20c8946f1bf4" (UID: "735c003d-082d-431f-9906-20c8946f1bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.721563 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data" (OuterVolumeSpecName: "config-data") pod "735c003d-082d-431f-9906-20c8946f1bf4" (UID: "735c003d-082d-431f-9906-20c8946f1bf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.773891 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.773940 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4qhl\" (UniqueName: \"kubernetes.io/projected/735c003d-082d-431f-9906-20c8946f1bf4-kube-api-access-p4qhl\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:09 crc kubenswrapper[4873]: I0219 10:02:09.773954 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735c003d-082d-431f-9906-20c8946f1bf4-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.233195 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-86n9s" event={"ID":"735c003d-082d-431f-9906-20c8946f1bf4","Type":"ContainerDied","Data":"e7a7027c2775e72cf8a30257c067cda81d18ed0d292fcd29a26190fc119c919e"}
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.233233 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-86n9s"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.233235 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a7027c2775e72cf8a30257c067cda81d18ed0d292fcd29a26190fc119c919e"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500509 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"]
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500871 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a075072a-1153-4963-91c7-e9e2aa08f988" containerName="watcher-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500888 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a075072a-1153-4963-91c7-e9e2aa08f988" containerName="watcher-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500906 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f97f25-d006-40d7-a090-ab45ab11b282" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500913 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f97f25-d006-40d7-a090-ab45ab11b282" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500921 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="init"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500927 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="init"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500938 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="dnsmasq-dns"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500944 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="dnsmasq-dns"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500952 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735c003d-082d-431f-9906-20c8946f1bf4" containerName="keystone-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500957 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="735c003d-082d-431f-9906-20c8946f1bf4" containerName="keystone-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500965 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679f69ef-9960-4e33-a6aa-09baefabc417" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500971 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="679f69ef-9960-4e33-a6aa-09baefabc417" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.500980 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec120760-bb10-44ff-bbb0-ed1665b4e17b" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.500987 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec120760-bb10-44ff-bbb0-ed1665b4e17b" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501002 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501009 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501017 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="extract-content"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501023 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="extract-content"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501030 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501036 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501050 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="extract-utilities"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501056 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="extract-utilities"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501067 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0daf0d-c150-49de-98af-3f65dd78112f" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501073 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0daf0d-c150-49de-98af-3f65dd78112f" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: E0219 10:02:10.501083 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7769ae-caf0-4f62-be96-90d6fa334259" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501089 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7769ae-caf0-4f62-be96-90d6fa334259" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501639 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0daf0d-c150-49de-98af-3f65dd78112f" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501667 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d115a791-c703-4c6e-91e5-8f3ab9608277" containerName="registry-server"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501684 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501708 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f97f25-d006-40d7-a090-ab45ab11b282" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501722 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="679f69ef-9960-4e33-a6aa-09baefabc417" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501739 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7769ae-caf0-4f62-be96-90d6fa334259" containerName="mariadb-account-create-update"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501754 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="735c003d-082d-431f-9906-20c8946f1bf4" containerName="keystone-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501772 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a075072a-1153-4963-91c7-e9e2aa08f988" containerName="watcher-db-sync"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501785 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec120760-bb10-44ff-bbb0-ed1665b4e17b" containerName="mariadb-database-create"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.501799 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="710b77db-c69e-4428-93f6-7ce8b2c7ee17" containerName="dnsmasq-dns"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.502740 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.522085 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.553790 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nf742"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.554981 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.563282 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.563962 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-27d74"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.566583 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.566620 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.568146 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nf742"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.568501 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596377 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596477 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596556 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tm6v\" (UniqueName: \"kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596592 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596638 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.596664 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.658036 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.659213 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.670159 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.674539 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-m755p"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.680174 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.697976 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698019 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698051 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698091 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698126 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698163 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698182 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698197 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698224 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698256 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tm6v\" (UniqueName: \"kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698280 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.698300 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphnf\" (UniqueName: \"kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.700059 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.700456 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.701030 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.701041 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.701484 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.701545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.702615 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.703843 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.704657 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.705714 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.708682 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.748156 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tm6v\" (UniqueName: \"kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v\") pod \"dnsmasq-dns-5c85f9c585-fhvwm\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.754168 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.790599 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802179 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0"
Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802227 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4b9v6\" (UniqueName: \"kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802251 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802272 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802296 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802327 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802397 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts\") pod 
\"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802424 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf92r\" (UniqueName: \"kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802465 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802510 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802553 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxrn\" (UniqueName: \"kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802581 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data\") pod 
\"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802600 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sphnf\" (UniqueName: \"kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802654 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802707 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802762 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802808 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " 
pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802855 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.802870 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.806815 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.821613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 
10:02:10.821699 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.822945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.824509 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.827472 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.838978 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.839613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sphnf\" (UniqueName: \"kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.843281 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts\") pod \"keystone-bootstrap-nf742\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.843897 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.844634 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.844841 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-l5tm9" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.845303 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.899012 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913718 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " 
pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913793 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913820 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf92r\" (UniqueName: \"kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913930 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxrn\" (UniqueName: \"kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913955 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.913991 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914028 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914064 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914082 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914218 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914238 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdfl7\" (UniqueName: \"kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914264 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914280 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914313 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914332 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data\") pod 
\"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914358 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b9v6\" (UniqueName: \"kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914373 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914531 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.914762 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.927767 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.941545 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.942000 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.944522 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.949933 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.950474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " pod="openstack/watcher-applier-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.953125 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.969231 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.969697 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.969733 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gqrb5"] Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.970814 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gqrb5" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.974853 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.975079 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tmcc9" Feb 19 10:02:10 crc kubenswrapper[4873]: I0219 10:02:10.978477 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:10.997299 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gqrb5"] Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:10.997630 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b9v6\" (UniqueName: \"kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6\") pod \"watcher-applier-0\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " 
pod="openstack/watcher-applier-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.010527 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.010799 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf92r\" (UniqueName: \"kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r\") pod \"watcher-decision-engine-0\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.015845 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bxrn\" (UniqueName: \"kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn\") pod \"watcher-api-0\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " pod="openstack/watcher-api-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.016845 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.016885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.016913 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-pdfl7\" (UniqueName: \"kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.016949 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.016974 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.025427 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.026034 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.026786 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data\") pod 
\"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.030012 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vf762"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.030457 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.031470 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.039984 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.053039 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.054055 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pk4jm"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.054335 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.054900 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.082544 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdfl7\" (UniqueName: \"kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7\") pod \"horizon-98c8c74bf-wsl5f\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " pod="openstack/horizon-98c8c74bf-wsl5f"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.095418 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vf762"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118274 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118597 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118632 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118667 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118777 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118805 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48k6w\" (UniqueName: \"kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118838 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118860 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.118906 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ttv\" (UniqueName: \"kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.133266 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4pv5z"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.134368 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.141146 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4pv5z"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.141472 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.141979 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t72rv"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.149192 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.181382 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.182912 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.220406 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-98c8c74bf-wsl5f"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222602 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222683 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222714 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222754 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222814 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222931 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78b5v\" (UniqueName: \"kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.222961 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.223011 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48k6w\" (UniqueName: \"kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.223046 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.223067 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.223486 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ttv\" (UniqueName: \"kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.244662 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.256424 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.256898 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.262318 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ttv\" (UniqueName: \"kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.264732 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.265660 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48k6w\" (UniqueName: \"kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.269919 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config\") pod \"neutron-db-sync-vf762\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.278902 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf762"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.290205 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.290836 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.302294 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data\") pod \"cinder-db-sync-gqrb5\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.335501 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.357498 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407176 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407244 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407366 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78b5v\" (UniqueName: \"kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407444 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchwr\" (UniqueName: \"kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407484 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407530 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407687 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.407712 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.426250 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.434763 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.441752 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.467999 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78b5v\" (UniqueName: \"kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v\") pod \"barbican-db-sync-4pv5z\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.514805 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515134 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515237 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515312 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515478 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515556 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7h7\" (UniqueName: \"kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515662 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchwr\" (UniqueName: \"kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515727 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515804 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.515881 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.516916 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.517588 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.518924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.519564 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.521401 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.559377 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.559755 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-98gbw"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.563895 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gqrb5"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.564460 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-98gbw"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.564574 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.565470 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchwr\" (UniqueName: \"kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr\") pod \"dnsmasq-dns-99d6b5b4f-2j7fk\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.577456 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.581146 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.583363 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-x7shj"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.617568 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.622598 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.628839 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.629316 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.629473 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7h7\" (UniqueName: \"kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.637204 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.638473 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.640303 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.652138 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4pv5z"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.657014 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.664809 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.670846 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.676856 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.677097 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.677714 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7h7\" (UniqueName: \"kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7\") pod \"horizon-79f476f4fc-dsgbh\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.704826 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.740452 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.741395 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.742080 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.743051 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.743692 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9t4k\" (UniqueName: \"kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.760437 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.790745 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79f476f4fc-dsgbh"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.846272 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nf742"]
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849192 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849268 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849286 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849308 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849342 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849373 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849400 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849428 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849490 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9t4k\" (UniqueName: \"kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849548 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw"
Feb 19 10:02:11 crc kubenswrapper[4873]: I0219
10:02:11.849606 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.849627 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxqg\" (UniqueName: \"kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.862687 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.864260 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.882425 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.890772 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.893516 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9t4k\" (UniqueName: \"kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k\") pod \"placement-db-sync-98gbw\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958205 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xxqg\" (UniqueName: \"kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958263 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958303 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958323 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " 
pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958357 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958390 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.958422 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.959569 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.959797 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.967173 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.974146 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.977174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.979072 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.989462 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:11 crc kubenswrapper[4873]: I0219 10:02:11.993918 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxqg\" (UniqueName: \"kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg\") pod \"ceilometer-0\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " pod="openstack/ceilometer-0" Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.000519 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.026684 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.287697 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nf742" event={"ID":"f631ba50-5961-428e-83a5-a8ddb50085d3","Type":"ContainerStarted","Data":"f5279a2fd1b4d18b8c04c2e7d237f62d2fc966d80132b8b737d8df303a78f856"} Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.301636 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" event={"ID":"90450452-854d-4886-b5ce-828f85c3f721","Type":"ContainerStarted","Data":"d4676ca17a3f8486e0adc6dd7b1d20b3b030b6852dc5274a36cbf6268c9cc2f3"} Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.420573 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vf762"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.472638 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.491485 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:12 crc kubenswrapper[4873]: W0219 10:02:12.495443 4873 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda575d51d_1ad3_422e_8e7c_b24b2c5de526.slice/crio-581b3a000070ae451f6e9ef110e53c2e98989bd07a9fba9a09c69f4b1ecfba88 WatchSource:0}: Error finding container 581b3a000070ae451f6e9ef110e53c2e98989bd07a9fba9a09c69f4b1ecfba88: Status 404 returned error can't find the container with id 581b3a000070ae451f6e9ef110e53c2e98989bd07a9fba9a09c69f4b1ecfba88 Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.502730 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.720092 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gqrb5"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.731269 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.741562 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:12 crc kubenswrapper[4873]: W0219 10:02:12.750341 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc639af02_a4c8_40cf_947e_a50353ab2537.slice/crio-9eac663eb58aa13e3523f32d4bcb37aa001e4ee953f53b136a077545dfbf1008 WatchSource:0}: Error finding container 9eac663eb58aa13e3523f32d4bcb37aa001e4ee953f53b136a077545dfbf1008: Status 404 returned error can't find the container with id 9eac663eb58aa13e3523f32d4bcb37aa001e4ee953f53b136a077545dfbf1008 Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.775016 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.968220 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4pv5z"] Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.977427 
4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"] Feb 19 10:02:12 crc kubenswrapper[4873]: W0219 10:02:12.983687 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402372ed_3c0d_4d12_a4f5_bbd82024a08d.slice/crio-676e73049674f4406ff08bc00e5c61a6ce15ece9c685ed8a76d9fda336863789 WatchSource:0}: Error finding container 676e73049674f4406ff08bc00e5c61a6ce15ece9c685ed8a76d9fda336863789: Status 404 returned error can't find the container with id 676e73049674f4406ff08bc00e5c61a6ce15ece9c685ed8a76d9fda336863789 Feb 19 10:02:12 crc kubenswrapper[4873]: I0219 10:02:12.984746 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.229592 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-98gbw"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.259661 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.329422 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gqrb5" event={"ID":"ce5accb4-1da0-4a21-a289-7dba33ad935f","Type":"ContainerStarted","Data":"7091928b68df42cd9ae5c284cfdb9622dc758710a4af850abe1bece12bfc74a3"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.332621 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerStarted","Data":"0782a10551b13dd61abc8e02874c936098093870d1d93858b972201b9cd3c7da"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.332875 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerStarted","Data":"e625952875a51eb5caf68d1f4611160ae7316ded4cd569ca37c367b0a5f6884b"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.332891 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerStarted","Data":"581b3a000070ae451f6e9ef110e53c2e98989bd07a9fba9a09c69f4b1ecfba88"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.334566 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.340213 4873 generic.go:334] "Generic (PLEG): container finished" podID="90450452-854d-4886-b5ce-828f85c3f721" containerID="da13141d440c09b143067199911d8baeb575411d93028b4a5a0e7f29369df1da" exitCode=0 Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.340307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" event={"ID":"90450452-854d-4886-b5ce-828f85c3f721","Type":"ContainerDied","Data":"da13141d440c09b143067199911d8baeb575411d93028b4a5a0e7f29369df1da"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.344210 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": dial tcp 10.217.0.151:9322: connect: connection refused" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.355908 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.355889357 podStartE2EDuration="3.355889357s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:13.353777434 +0000 UTC m=+1042.643209072" 
watchObservedRunningTime="2026-02-19 10:02:13.355889357 +0000 UTC m=+1042.645321015" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.357595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" event={"ID":"402372ed-3c0d-4d12-a4f5-bbd82024a08d","Type":"ContainerStarted","Data":"676e73049674f4406ff08bc00e5c61a6ce15ece9c685ed8a76d9fda336863789"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.372585 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerStarted","Data":"c341b58fa66a9c7c1455f8e33fdfb22dd5f6b0a9b06cdb661264c78977069ea2"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.383471 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-98c8c74bf-wsl5f" event={"ID":"c639af02-a4c8-40cf-947e-a50353ab2537","Type":"ContainerStarted","Data":"9eac663eb58aa13e3523f32d4bcb37aa001e4ee953f53b136a077545dfbf1008"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.393499 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95402218-fbbb-4453-aba6-d135ba3a26bd","Type":"ContainerStarted","Data":"db54f1f84baadc90d4260a330bf2f720ca4cd24fbaeefa2f3e5f18f033d41844"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.406045 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98gbw" event={"ID":"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6","Type":"ContainerStarted","Data":"37b0ac3d48e8bec4044f6f8f22d9abb8c79cc58cedf7d1bbf6b0fb89fcc2a84a"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.429009 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b8008736-31ec-491c-aa52-03b9413feab9","Type":"ContainerStarted","Data":"a2ea00441668ccb8b861c9df038d7b7675d321f71184b1fb21464b11d16d7eef"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.431370 
4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79f476f4fc-dsgbh" event={"ID":"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d","Type":"ContainerStarted","Data":"60cadfc4a70dc8e7874b849ac63b6a6b2ae5cbe2b77781abafab78ab09e65314"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.433815 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4pv5z" event={"ID":"943d069e-6ad4-4411-b937-c4499f0ced6f","Type":"ContainerStarted","Data":"db07d7286194b278e2cec929f66edc47c3ebbe39668738c5d112ac9e99a6a103"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.447887 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf762" event={"ID":"99868e3f-82d7-4f0c-9056-661e95486e6e","Type":"ContainerStarted","Data":"22b91ea45d57e3f8ed16da3e5a4058c15af39a6d914075c4521ba6755b03990b"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.447934 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf762" event={"ID":"99868e3f-82d7-4f0c-9056-661e95486e6e","Type":"ContainerStarted","Data":"51130049c4f72c45b52b368bcf10130af9e763c98f2e5fc842a0ae20064148f7"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.451309 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nf742" event={"ID":"f631ba50-5961-428e-83a5-a8ddb50085d3","Type":"ContainerStarted","Data":"dfe8a7cf5aeabc3bd0899d011af3258a91fc6d795682d82ec9f1f9c569448452"} Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.459331 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.465296 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vf762" podStartSLOduration=3.465280451 podStartE2EDuration="3.465280451s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:13.464976033 +0000 UTC m=+1042.754407671" watchObservedRunningTime="2026-02-19 10:02:13.465280451 +0000 UTC m=+1042.754712089" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.549915 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nf742" podStartSLOduration=3.54989385 podStartE2EDuration="3.54989385s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:13.542474946 +0000 UTC m=+1042.831906574" watchObservedRunningTime="2026-02-19 10:02:13.54989385 +0000 UTC m=+1042.839325488" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.762914 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.834939 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.891909 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.949148 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.951212 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:13 crc kubenswrapper[4873]: E0219 10:02:13.951517 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90450452-854d-4886-b5ce-828f85c3f721" containerName="init" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.951532 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="90450452-854d-4886-b5ce-828f85c3f721" containerName="init" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.951718 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="90450452-854d-4886-b5ce-828f85c3f721" containerName="init" Feb 19 10:02:13 crc kubenswrapper[4873]: I0219 10:02:13.952589 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.049745 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.082635 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.082685 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.082704 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.082838 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.082856 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc 
kubenswrapper[4873]: I0219 10:02:14.082912 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tm6v\" (UniqueName: \"kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v\") pod \"90450452-854d-4886-b5ce-828f85c3f721\" (UID: \"90450452-854d-4886-b5ce-828f85c3f721\") " Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.083116 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfmw\" (UniqueName: \"kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.083152 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.083186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.083204 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.083232 
4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.111256 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v" (OuterVolumeSpecName: "kube-api-access-2tm6v") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "kube-api-access-2tm6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.118861 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.136911 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config" (OuterVolumeSpecName: "config") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186383 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186434 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186481 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186611 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfmw\" (UniqueName: \"kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 
10:02:14.186717 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186731 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.186743 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tm6v\" (UniqueName: \"kubernetes.io/projected/90450452-854d-4886-b5ce-828f85c3f721-kube-api-access-2tm6v\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.187454 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.188451 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.188819 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.204837 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.205170 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.254093 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.268762 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfmw\" (UniqueName: \"kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw\") pod \"horizon-76bfd776d9-fdg7f\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.269451 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90450452-854d-4886-b5ce-828f85c3f721" (UID: "90450452-854d-4886-b5ce-828f85c3f721"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.291821 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.292793 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.292818 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.292827 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90450452-854d-4886-b5ce-828f85c3f721-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.476142 4873 generic.go:334] "Generic (PLEG): container finished" podID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerID="0caa3e8656105d67ae98953c7c54ce1e536f9e27f2d0305163026fbf53ca79e0" exitCode=0 Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.477328 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" event={"ID":"402372ed-3c0d-4d12-a4f5-bbd82024a08d","Type":"ContainerDied","Data":"0caa3e8656105d67ae98953c7c54ce1e536f9e27f2d0305163026fbf53ca79e0"} Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.481907 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" event={"ID":"90450452-854d-4886-b5ce-828f85c3f721","Type":"ContainerDied","Data":"d4676ca17a3f8486e0adc6dd7b1d20b3b030b6852dc5274a36cbf6268c9cc2f3"} Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.481940 4873 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c85f9c585-fhvwm" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.481951 4873 scope.go:117] "RemoveContainer" containerID="da13141d440c09b143067199911d8baeb575411d93028b4a5a0e7f29369df1da" Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.662623 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"] Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.670572 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c85f9c585-fhvwm"] Feb 19 10:02:14 crc kubenswrapper[4873]: I0219 10:02:14.900001 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:14 crc kubenswrapper[4873]: W0219 10:02:14.912943 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6827937b_ebcc_45a6_98e3_08d49115503b.slice/crio-3316c81d04a3ec6d98aa8cad078c0b0a6499f0b69e6af433f3ff3dc9ecbf7c2c WatchSource:0}: Error finding container 3316c81d04a3ec6d98aa8cad078c0b0a6499f0b69e6af433f3ff3dc9ecbf7c2c: Status 404 returned error can't find the container with id 3316c81d04a3ec6d98aa8cad078c0b0a6499f0b69e6af433f3ff3dc9ecbf7c2c Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.511020 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" containerID="cri-o://0782a10551b13dd61abc8e02874c936098093870d1d93858b972201b9cd3c7da" gracePeriod=30 Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.511008 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api-log" containerID="cri-o://e625952875a51eb5caf68d1f4611160ae7316ded4cd569ca37c367b0a5f6884b" gracePeriod=30 
Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.513091 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90450452-854d-4886-b5ce-828f85c3f721" path="/var/lib/kubelet/pods/90450452-854d-4886-b5ce-828f85c3f721/volumes" Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.514061 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.514095 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" event={"ID":"402372ed-3c0d-4d12-a4f5-bbd82024a08d","Type":"ContainerStarted","Data":"7a12463c2cf197b1f920440df50985d94ae3e7a22c56ad882d01bc741d80d703"} Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.514130 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bfd776d9-fdg7f" event={"ID":"6827937b-ebcc-45a6-98e3-08d49115503b","Type":"ContainerStarted","Data":"3316c81d04a3ec6d98aa8cad078c0b0a6499f0b69e6af433f3ff3dc9ecbf7c2c"} Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.534839 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" podStartSLOduration=4.534817605 podStartE2EDuration="4.534817605s" podCreationTimestamp="2026-02-19 10:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:15.530079897 +0000 UTC m=+1044.819511545" watchObservedRunningTime="2026-02-19 10:02:15.534817605 +0000 UTC m=+1044.824249243" Feb 19 10:02:15 crc kubenswrapper[4873]: I0219 10:02:15.572395 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": EOF" Feb 19 10:02:16 crc kubenswrapper[4873]: I0219 10:02:16.054543 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 10:02:16 crc kubenswrapper[4873]: I0219 10:02:16.528686 4873 generic.go:334] "Generic (PLEG): container finished" podID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerID="e625952875a51eb5caf68d1f4611160ae7316ded4cd569ca37c367b0a5f6884b" exitCode=143 Feb 19 10:02:16 crc kubenswrapper[4873]: I0219 10:02:16.528784 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerDied","Data":"e625952875a51eb5caf68d1f4611160ae7316ded4cd569ca37c367b0a5f6884b"} Feb 19 10:02:18 crc kubenswrapper[4873]: I0219 10:02:18.334018 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": read tcp 10.217.0.2:59434->10.217.0.151:9322: read: connection reset by peer" Feb 19 10:02:18 crc kubenswrapper[4873]: I0219 10:02:18.334965 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": dial tcp 10.217.0.151:9322: connect: connection refused" Feb 19 10:02:19 crc kubenswrapper[4873]: I0219 10:02:19.582996 4873 generic.go:334] "Generic (PLEG): container finished" podID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerID="0782a10551b13dd61abc8e02874c936098093870d1d93858b972201b9cd3c7da" exitCode=0 Feb 19 10:02:19 crc kubenswrapper[4873]: I0219 10:02:19.583075 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerDied","Data":"0782a10551b13dd61abc8e02874c936098093870d1d93858b972201b9cd3c7da"} Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.488602 4873 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.512492 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.513948 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.515384 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.533499 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.595943 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.608910 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6687d9896d-v96j2"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.620004 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6687d9896d-v96j2"] Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.620187 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679486 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679555 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679621 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679649 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679691 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt75m\" (UniqueName: \"kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m\") pod \"horizon-87df9b646-2jf26\" (UID: 
\"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679709 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.679725 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.781011 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7929\" (UniqueName: \"kubernetes.io/projected/fa527f64-6e38-48c2-9927-a319f4579070-kube-api-access-f7929\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.781078 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.781140 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-combined-ca-bundle\") pod \"horizon-6687d9896d-v96j2\" (UID: 
\"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.781180 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783240 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa527f64-6e38-48c2-9927-a319f4579070-logs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783401 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783568 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-tls-certs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 
10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783622 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt75m\" (UniqueName: \"kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783673 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-scripts\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783704 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783776 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-secret-key\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.783859 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-config-data\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.784607 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.784900 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.785282 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.790507 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.791587 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.806240 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.815622 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt75m\" (UniqueName: \"kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m\") pod \"horizon-87df9b646-2jf26\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.841451 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887170 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-scripts\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887233 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-secret-key\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887281 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-config-data\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887364 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7929\" (UniqueName: \"kubernetes.io/projected/fa527f64-6e38-48c2-9927-a319f4579070-kube-api-access-f7929\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-combined-ca-bundle\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc 
kubenswrapper[4873]: I0219 10:02:20.887504 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa527f64-6e38-48c2-9927-a319f4579070-logs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.887560 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-tls-certs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.889866 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-scripts\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.890158 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa527f64-6e38-48c2-9927-a319f4579070-logs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.891924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa527f64-6e38-48c2-9927-a319f4579070-config-data\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.894745 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-combined-ca-bundle\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.895280 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-secret-key\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.901565 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa527f64-6e38-48c2-9927-a319f4579070-horizon-tls-certs\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.908888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7929\" (UniqueName: \"kubernetes.io/projected/fa527f64-6e38-48c2-9927-a319f4579070-kube-api-access-f7929\") pod \"horizon-6687d9896d-v96j2\" (UID: \"fa527f64-6e38-48c2-9927-a319f4579070\") " pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:20 crc kubenswrapper[4873]: I0219 10:02:20.947812 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:21 crc kubenswrapper[4873]: I0219 10:02:21.617828 4873 generic.go:334] "Generic (PLEG): container finished" podID="f631ba50-5961-428e-83a5-a8ddb50085d3" containerID="dfe8a7cf5aeabc3bd0899d011af3258a91fc6d795682d82ec9f1f9c569448452" exitCode=0 Feb 19 10:02:21 crc kubenswrapper[4873]: I0219 10:02:21.617866 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nf742" event={"ID":"f631ba50-5961-428e-83a5-a8ddb50085d3","Type":"ContainerDied","Data":"dfe8a7cf5aeabc3bd0899d011af3258a91fc6d795682d82ec9f1f9c569448452"} Feb 19 10:02:21 crc kubenswrapper[4873]: I0219 10:02:21.762268 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" Feb 19 10:02:21 crc kubenswrapper[4873]: I0219 10:02:21.838635 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"] Feb 19 10:02:21 crc kubenswrapper[4873]: I0219 10:02:21.838866 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" containerID="cri-o://3202df88b506237a1560baea9fc86854fa472069f50ad6c6f94a7855eaa6ff1a" gracePeriod=10 Feb 19 10:02:22 crc kubenswrapper[4873]: I0219 10:02:22.528353 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 19 10:02:22 crc kubenswrapper[4873]: I0219 10:02:22.634344 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerID="3202df88b506237a1560baea9fc86854fa472069f50ad6c6f94a7855eaa6ff1a" exitCode=0 Feb 19 10:02:22 crc kubenswrapper[4873]: I0219 10:02:22.634423 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" event={"ID":"8e8c0292-715e-4d4d-a552-5229adfc3e74","Type":"ContainerDied","Data":"3202df88b506237a1560baea9fc86854fa472069f50ad6c6f94a7855eaa6ff1a"} Feb 19 10:02:24 crc kubenswrapper[4873]: I0219 10:02:24.655950 4873 generic.go:334] "Generic (PLEG): container finished" podID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" containerID="e5eec3e87329724888651bc35b53713711df95ef48801142ac4dd2488284d91d" exitCode=0 Feb 19 10:02:24 crc kubenswrapper[4873]: I0219 10:02:24.656071 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9472r" event={"ID":"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba","Type":"ContainerDied","Data":"e5eec3e87329724888651bc35b53713711df95ef48801142ac4dd2488284d91d"} Feb 19 10:02:26 crc kubenswrapper[4873]: I0219 10:02:26.054501 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:02:27 crc kubenswrapper[4873]: I0219 10:02:27.528692 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused" Feb 19 10:02:30 crc kubenswrapper[4873]: E0219 10:02:30.184301 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:30 crc kubenswrapper[4873]: E0219 10:02:30.184730 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:30 crc kubenswrapper[4873]: E0219 10:02:30.184907 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n675h65ch64ch558h87h697h55bhf6h59dhddh88hfh546hcch5c6h66ch56chdch547h5cbh59fh667hb5h5c7h7ch8ch6ch55bhfch89hf8h67q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf7h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,V
olumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-79f476f4fc-dsgbh_openstack(dca31fe9-df4d-4734-afcd-b0ebf4a54e4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:30 crc kubenswrapper[4873]: E0219 10:02:30.187389 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-79f476f4fc-dsgbh" podUID="dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" Feb 19 10:02:31 crc kubenswrapper[4873]: I0219 10:02:31.055880 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.489227 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.489292 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.489465 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:38.102.83.20:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9t4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-98gbw_openstack(ec5489a2-23e2-4875-a19b-d15b4ad6c8c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.490688 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-98gbw" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.736134 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-98gbw" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.801025 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.801562 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.801700 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.20:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h68ch56fh5dh5cch696h75h8ch556h67dh676h58bh547h579h9ch6h6ch655h67ch7ch544h654h5f7h686h9h66dh559h79h5d7h8dh75hc6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xxqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ab448dfd-a67c-49b5-a153-92a5a6f504b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.822482 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.822537 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.822655 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c6h8ch659h576h658h544h54ch56ch594h65bh56ch54fhfh598h5b8h5c4h567h75h5dhc9hbhfch97hcbh684h559h5d9h64bhdbh555h668h689q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vfmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76bfd776d9-fdg7f_openstack(6827937b-ebcc-45a6-98e3-08d49115503b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 
10:02:31.825656 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-76bfd776d9-fdg7f" podUID="6827937b-ebcc-45a6-98e3-08d49115503b" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.846006 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.846913 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 10:02:31.847169 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n567h546hb4h5c4h9fh67h64chbbh5b4h68fh659hcdh696h574h666h59h555hfdh55ch59fh656h67bh58h689h5b5h5c6h564h697h579h8dh674h5c4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdfl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-98c8c74bf-wsl5f_openstack(c639af02-a4c8-40cf-947e-a50353ab2537): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:31 crc kubenswrapper[4873]: E0219 
10:02:31.851918 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-98c8c74bf-wsl5f" podUID="c639af02-a4c8-40cf-947e-a50353ab2537" Feb 19 10:02:31 crc kubenswrapper[4873]: I0219 10:02:31.902624 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:31 crc kubenswrapper[4873]: I0219 10:02:31.908239 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:31 crc kubenswrapper[4873]: I0219 10:02:31.921641 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9472r" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.100033 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca\") pod \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101413 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle\") pod \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101470 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sphnf\" (UniqueName: 
\"kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data\") pod \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101548 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data\") pod \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101614 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data\") pod \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101649 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101690 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs\") pod \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101714 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101745 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle\") pod \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101784 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bxrn\" (UniqueName: \"kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn\") pod \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\" (UID: \"a575d51d-1ad3-422e-8e7c-b24b2c5de526\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101812 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101836 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" (UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101861 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data\") pod \"f631ba50-5961-428e-83a5-a8ddb50085d3\" 
(UID: \"f631ba50-5961-428e-83a5-a8ddb50085d3\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.101893 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btfz7\" (UniqueName: \"kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7\") pod \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\" (UID: \"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba\") " Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.102411 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs" (OuterVolumeSpecName: "logs") pod "a575d51d-1ad3-422e-8e7c-b24b2c5de526" (UID: "a575d51d-1ad3-422e-8e7c-b24b2c5de526"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.102653 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a575d51d-1ad3-422e-8e7c-b24b2c5de526-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.107251 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.107268 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" (UID: "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.110332 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn" (OuterVolumeSpecName: "kube-api-access-9bxrn") pod "a575d51d-1ad3-422e-8e7c-b24b2c5de526" (UID: "a575d51d-1ad3-422e-8e7c-b24b2c5de526"). InnerVolumeSpecName "kube-api-access-9bxrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.110451 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.112383 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7" (OuterVolumeSpecName: "kube-api-access-btfz7") pod "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" (UID: "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba"). InnerVolumeSpecName "kube-api-access-btfz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.114859 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf" (OuterVolumeSpecName: "kube-api-access-sphnf") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "kube-api-access-sphnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.121811 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts" (OuterVolumeSpecName: "scripts") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.128033 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" (UID: "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.135247 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.140544 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data" (OuterVolumeSpecName: "config-data") pod "f631ba50-5961-428e-83a5-a8ddb50085d3" (UID: "f631ba50-5961-428e-83a5-a8ddb50085d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.148213 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a575d51d-1ad3-422e-8e7c-b24b2c5de526" (UID: "a575d51d-1ad3-422e-8e7c-b24b2c5de526"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.158818 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a575d51d-1ad3-422e-8e7c-b24b2c5de526" (UID: "a575d51d-1ad3-422e-8e7c-b24b2c5de526"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.160975 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data" (OuterVolumeSpecName: "config-data") pod "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" (UID: "7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.187608 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data" (OuterVolumeSpecName: "config-data") pod "a575d51d-1ad3-422e-8e7c-b24b2c5de526" (UID: "a575d51d-1ad3-422e-8e7c-b24b2c5de526"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204084 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204151 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sphnf\" (UniqueName: \"kubernetes.io/projected/f631ba50-5961-428e-83a5-a8ddb50085d3-kube-api-access-sphnf\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204196 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204207 4873 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204216 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204225 4873 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204232 4873 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204240 4873 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204249 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bxrn\" (UniqueName: \"kubernetes.io/projected/a575d51d-1ad3-422e-8e7c-b24b2c5de526-kube-api-access-9bxrn\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204275 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204293 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204302 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f631ba50-5961-428e-83a5-a8ddb50085d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204313 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btfz7\" (UniqueName: \"kubernetes.io/projected/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba-kube-api-access-btfz7\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.204322 4873 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a575d51d-1ad3-422e-8e7c-b24b2c5de526-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.750450 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nf742" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.750404 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nf742" event={"ID":"f631ba50-5961-428e-83a5-a8ddb50085d3","Type":"ContainerDied","Data":"f5279a2fd1b4d18b8c04c2e7d237f62d2fc966d80132b8b737d8df303a78f856"} Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.750528 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5279a2fd1b4d18b8c04c2e7d237f62d2fc966d80132b8b737d8df303a78f856" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.753749 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.754012 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a575d51d-1ad3-422e-8e7c-b24b2c5de526","Type":"ContainerDied","Data":"581b3a000070ae451f6e9ef110e53c2e98989bd07a9fba9a09c69f4b1ecfba88"} Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.754044 4873 scope.go:117] "RemoveContainer" containerID="0782a10551b13dd61abc8e02874c936098093870d1d93858b972201b9cd3c7da" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.767778 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9472r" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.768260 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9472r" event={"ID":"7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba","Type":"ContainerDied","Data":"e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b"} Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.768306 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77590cdf691d40582aa326fa0b9971a60cdea85c0908db700bc4cf93f06c74b" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.841814 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.853548 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.860863 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:32 crc kubenswrapper[4873]: E0219 10:02:32.861218 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" containerName="glance-db-sync" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861232 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" containerName="glance-db-sync" Feb 19 10:02:32 crc kubenswrapper[4873]: E0219 10:02:32.861246 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861252 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" Feb 19 10:02:32 crc kubenswrapper[4873]: E0219 10:02:32.861270 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f631ba50-5961-428e-83a5-a8ddb50085d3" 
containerName="keystone-bootstrap" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861277 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f631ba50-5961-428e-83a5-a8ddb50085d3" containerName="keystone-bootstrap" Feb 19 10:02:32 crc kubenswrapper[4873]: E0219 10:02:32.861288 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api-log" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861293 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api-log" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861468 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f631ba50-5961-428e-83a5-a8ddb50085d3" containerName="keystone-bootstrap" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861483 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861494 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" containerName="glance-db-sync" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.861508 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api-log" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.862356 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.866974 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.899508 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.923055 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.923093 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.923176 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwmp\" (UniqueName: \"kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.923196 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:32 crc kubenswrapper[4873]: I0219 10:02:32.923271 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.027805 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.027908 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.027936 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.028000 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwmp\" (UniqueName: \"kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.028023 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca\") pod 
\"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.029755 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.032442 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.037341 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.038061 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.047792 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwmp\" (UniqueName: \"kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp\") pod \"watcher-api-0\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.151116 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-nf742"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.161744 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nf742"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.191045 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.215890 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wrcpc"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.217096 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.218896 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.218991 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-27d74" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.219433 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.222962 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.223187 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.243655 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.243782 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.243843 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.244047 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctkh\" (UniqueName: \"kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.244176 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.244196 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.253967 4873 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wrcpc"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.345507 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.345797 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.345853 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctkh\" (UniqueName: \"kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.345889 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.345904 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 
crc kubenswrapper[4873]: I0219 10:02:33.345930 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.354690 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.356396 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.359563 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.359640 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.365651 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.365899 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.367789 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.372184 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.393036 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctkh\" (UniqueName: \"kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh\") pod \"keystone-bootstrap-wrcpc\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447184 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447238 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447267 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mv6\" (UniqueName: \"kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447287 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447458 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.447535 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.500220 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" path="/var/lib/kubelet/pods/a575d51d-1ad3-422e-8e7c-b24b2c5de526/volumes" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.500939 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f631ba50-5961-428e-83a5-a8ddb50085d3" 
path="/var/lib/kubelet/pods/f631ba50-5961-428e-83a5-a8ddb50085d3/volumes" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549542 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549744 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549784 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549808 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549828 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9mv6\" (UniqueName: \"kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " 
pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.549850 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.550485 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.550733 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.550754 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.551590 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.551656 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.551818 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.567663 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mv6\" (UniqueName: \"kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6\") pod \"dnsmasq-dns-5687f4c549-n4g4v\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:33 crc kubenswrapper[4873]: I0219 10:02:33.788735 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.379931 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.381870 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.383597 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n9qxt" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.384317 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.384522 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.416950 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.464689 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.464734 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.464778 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: 
I0219 10:02:34.464828 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.464916 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.464938 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288js\" (UniqueName: \"kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.465056 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.475865 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.477511 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.480223 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.489980 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.566866 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567185 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567220 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567259 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567286 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567304 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288js\" (UniqueName: \"kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.567402 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.568223 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.568680 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.573308 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.573865 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.582264 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.582882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.591330 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288js\" (UniqueName: \"kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js\") pod \"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.612654 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-external-api-0\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669262 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669320 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669359 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669383 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669477 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669496 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxh4r\" (UniqueName: \"kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.669544 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.720454 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.770861 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.770921 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.770949 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.771038 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.771062 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxh4r\" (UniqueName: \"kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" 
Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.771099 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.771181 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.771317 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.772004 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.772341 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.775026 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.780810 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.781453 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.801636 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxh4r\" (UniqueName: \"kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:34 crc kubenswrapper[4873]: I0219 10:02:34.803217 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:35 crc kubenswrapper[4873]: I0219 10:02:35.093326 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:36 crc kubenswrapper[4873]: I0219 10:02:36.056680 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a575d51d-1ad3-422e-8e7c-b24b2c5de526" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:02:36 crc kubenswrapper[4873]: I0219 10:02:36.127165 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:36 crc kubenswrapper[4873]: I0219 10:02:36.213301 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:37 crc kubenswrapper[4873]: I0219 10:02:37.528666 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: i/o timeout" Feb 19 10:02:37 crc kubenswrapper[4873]: I0219 10:02:37.529008 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:02:42 crc kubenswrapper[4873]: I0219 10:02:42.529294 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: i/o timeout" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.518755 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.520803 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79f476f4fc-dsgbh" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.526657 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.531081 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643201 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data\") pod \"6827937b-ebcc-45a6-98e3-08d49115503b\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643306 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data\") pod \"c639af02-a4c8-40cf-947e-a50353ab2537\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data\") pod \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643368 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643425 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs\") pod \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643455 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfmw\" (UniqueName: \"kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw\") pod \"6827937b-ebcc-45a6-98e3-08d49115503b\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643706 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643733 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643792 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs\") pod \"c639af02-a4c8-40cf-947e-a50353ab2537\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643832 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts\") pod \"6827937b-ebcc-45a6-98e3-08d49115503b\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643856 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdfl7\" (UniqueName: \"kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7\") pod \"c639af02-a4c8-40cf-947e-a50353ab2537\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643915 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts\") pod \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643950 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.643973 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts\") pod \"c639af02-a4c8-40cf-947e-a50353ab2537\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.644000 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs\") pod \"6827937b-ebcc-45a6-98e3-08d49115503b\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.645637 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs" (OuterVolumeSpecName: "logs") pod "c639af02-a4c8-40cf-947e-a50353ab2537" (UID: 
"c639af02-a4c8-40cf-947e-a50353ab2537"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.645845 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs" (OuterVolumeSpecName: "logs") pod "6827937b-ebcc-45a6-98e3-08d49115503b" (UID: "6827937b-ebcc-45a6-98e3-08d49115503b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.645989 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7h7\" (UniqueName: \"kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7\") pod \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.646186 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k6b5\" (UniqueName: \"kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.646331 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key\") pod \"6827937b-ebcc-45a6-98e3-08d49115503b\" (UID: \"6827937b-ebcc-45a6-98e3-08d49115503b\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.646421 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts" (OuterVolumeSpecName: "scripts") pod "6827937b-ebcc-45a6-98e3-08d49115503b" (UID: "6827937b-ebcc-45a6-98e3-08d49115503b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.646627 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc\") pod \"8e8c0292-715e-4d4d-a552-5229adfc3e74\" (UID: \"8e8c0292-715e-4d4d-a552-5229adfc3e74\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.647015 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key\") pod \"c639af02-a4c8-40cf-947e-a50353ab2537\" (UID: \"c639af02-a4c8-40cf-947e-a50353ab2537\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.647214 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key\") pod \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\" (UID: \"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d\") " Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.647326 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data" (OuterVolumeSpecName: "config-data") pod "6827937b-ebcc-45a6-98e3-08d49115503b" (UID: "6827937b-ebcc-45a6-98e3-08d49115503b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.647558 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs" (OuterVolumeSpecName: "logs") pod "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" (UID: "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.647853 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data" (OuterVolumeSpecName: "config-data") pod "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" (UID: "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648916 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648936 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648951 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648959 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c639af02-a4c8-40cf-947e-a50353ab2537-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648969 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6827937b-ebcc-45a6-98e3-08d49115503b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.648977 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6827937b-ebcc-45a6-98e3-08d49115503b-logs\") on node \"crc\" 
DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.649058 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data" (OuterVolumeSpecName: "config-data") pod "c639af02-a4c8-40cf-947e-a50353ab2537" (UID: "c639af02-a4c8-40cf-947e-a50353ab2537"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.654275 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts" (OuterVolumeSpecName: "scripts") pod "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" (UID: "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.656380 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts" (OuterVolumeSpecName: "scripts") pod "c639af02-a4c8-40cf-947e-a50353ab2537" (UID: "c639af02-a4c8-40cf-947e-a50353ab2537"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.658661 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7" (OuterVolumeSpecName: "kube-api-access-pdfl7") pod "c639af02-a4c8-40cf-947e-a50353ab2537" (UID: "c639af02-a4c8-40cf-947e-a50353ab2537"). InnerVolumeSpecName "kube-api-access-pdfl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.669232 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw" (OuterVolumeSpecName: "kube-api-access-7vfmw") pod "6827937b-ebcc-45a6-98e3-08d49115503b" (UID: "6827937b-ebcc-45a6-98e3-08d49115503b"). InnerVolumeSpecName "kube-api-access-7vfmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.674516 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c639af02-a4c8-40cf-947e-a50353ab2537" (UID: "c639af02-a4c8-40cf-947e-a50353ab2537"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.693301 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6827937b-ebcc-45a6-98e3-08d49115503b" (UID: "6827937b-ebcc-45a6-98e3-08d49115503b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.693557 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7" (OuterVolumeSpecName: "kube-api-access-xf7h7") pod "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" (UID: "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d"). InnerVolumeSpecName "kube-api-access-xf7h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.695649 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" (UID: "dca31fe9-df4d-4734-afcd-b0ebf4a54e4d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.696231 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5" (OuterVolumeSpecName: "kube-api-access-2k6b5") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "kube-api-access-2k6b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.728295 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.728549 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.728774 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.729934 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config" (OuterVolumeSpecName: "config") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.738682 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e8c0292-715e-4d4d-a552-5229adfc3e74" (UID: "8e8c0292-715e-4d4d-a552-5229adfc3e74"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751204 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfmw\" (UniqueName: \"kubernetes.io/projected/6827937b-ebcc-45a6-98e3-08d49115503b-kube-api-access-7vfmw\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751243 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751257 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751268 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdfl7\" (UniqueName: \"kubernetes.io/projected/c639af02-a4c8-40cf-947e-a50353ab2537-kube-api-access-pdfl7\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751279 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751290 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751300 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751311 4873 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7h7\" (UniqueName: \"kubernetes.io/projected/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-kube-api-access-xf7h7\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751322 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k6b5\" (UniqueName: \"kubernetes.io/projected/8e8c0292-715e-4d4d-a552-5229adfc3e74-kube-api-access-2k6b5\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751333 4873 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6827937b-ebcc-45a6-98e3-08d49115503b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751344 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751355 4873 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c639af02-a4c8-40cf-947e-a50353ab2537-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751367 4873 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751377 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c639af02-a4c8-40cf-947e-a50353ab2537-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.751389 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8e8c0292-715e-4d4d-a552-5229adfc3e74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.877801 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-98c8c74bf-wsl5f" event={"ID":"c639af02-a4c8-40cf-947e-a50353ab2537","Type":"ContainerDied","Data":"9eac663eb58aa13e3523f32d4bcb37aa001e4ee953f53b136a077545dfbf1008"} Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.877847 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-98c8c74bf-wsl5f" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.879363 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bfd776d9-fdg7f" event={"ID":"6827937b-ebcc-45a6-98e3-08d49115503b","Type":"ContainerDied","Data":"3316c81d04a3ec6d98aa8cad078c0b0a6499f0b69e6af433f3ff3dc9ecbf7c2c"} Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.879411 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76bfd776d9-fdg7f" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.882566 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.882550 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" event={"ID":"8e8c0292-715e-4d4d-a552-5229adfc3e74","Type":"ContainerDied","Data":"04cdcce41cf06a6a4c22d13c5a42c60370cf2135128656d07548457adad958ae"} Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.884881 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79f476f4fc-dsgbh" event={"ID":"dca31fe9-df4d-4734-afcd-b0ebf4a54e4d","Type":"ContainerDied","Data":"60cadfc4a70dc8e7874b849ac63b6a6b2ae5cbe2b77781abafab78ab09e65314"} Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.884969 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79f476f4fc-dsgbh" Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.926963 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"] Feb 19 10:02:43 crc kubenswrapper[4873]: I0219 10:02:43.933976 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f77dfd79f-tg9w4"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.013081 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.033360 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76bfd776d9-fdg7f"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.063765 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.073857 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79f476f4fc-dsgbh"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.088753 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.095677 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-98c8c74bf-wsl5f"] Feb 19 10:02:44 crc kubenswrapper[4873]: E0219 10:02:44.562005 4873 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 19 10:02:44 crc kubenswrapper[4873]: E0219 10:02:44.562056 4873 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.20:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 19 10:02:44 crc kubenswrapper[4873]: E0219 10:02:44.562216 4873 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.20:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7ttv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gqrb5_openstack(ce5accb4-1da0-4a21-a289-7dba33ad935f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:02:44 crc kubenswrapper[4873]: E0219 10:02:44.563413 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gqrb5" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" Feb 19 10:02:44 crc kubenswrapper[4873]: I0219 10:02:44.588335 4873 scope.go:117] "RemoveContainer" containerID="e625952875a51eb5caf68d1f4611160ae7316ded4cd569ca37c367b0a5f6884b" Feb 19 10:02:44 crc kubenswrapper[4873]: E0219 10:02:44.900593 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.20:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-gqrb5" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.017868 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6687d9896d-v96j2"] Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.177353 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:02:45 crc kubenswrapper[4873]: W0219 10:02:45.442090 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa527f64_6e38_48c2_9927_a319f4579070.slice/crio-080ebd3682c04f85d87960e452b1d2aad6833e4217c592f0f75127df01aa50cf WatchSource:0}: Error finding container 
080ebd3682c04f85d87960e452b1d2aad6833e4217c592f0f75127df01aa50cf: Status 404 returned error can't find the container with id 080ebd3682c04f85d87960e452b1d2aad6833e4217c592f0f75127df01aa50cf Feb 19 10:02:45 crc kubenswrapper[4873]: W0219 10:02:45.445749 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcace1157_1459_4823_aa8f_b2c246d3adeb.slice/crio-9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e WatchSource:0}: Error finding container 9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e: Status 404 returned error can't find the container with id 9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.530705 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6827937b-ebcc-45a6-98e3-08d49115503b" path="/var/lib/kubelet/pods/6827937b-ebcc-45a6-98e3-08d49115503b/volumes" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.531479 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" path="/var/lib/kubelet/pods/8e8c0292-715e-4d4d-a552-5229adfc3e74/volumes" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.533093 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c639af02-a4c8-40cf-947e-a50353ab2537" path="/var/lib/kubelet/pods/c639af02-a4c8-40cf-947e-a50353ab2537/volumes" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.534013 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca31fe9-df4d-4734-afcd-b0ebf4a54e4d" path="/var/lib/kubelet/pods/dca31fe9-df4d-4734-afcd-b0ebf4a54e4d/volumes" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.580202 4873 scope.go:117] "RemoveContainer" containerID="3202df88b506237a1560baea9fc86854fa472069f50ad6c6f94a7855eaa6ff1a" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.714046 4873 scope.go:117] 
"RemoveContainer" containerID="c38f23c9308a52dc889562a59a6b3d3134f3aebd40d9ab2a2804a839bf127153" Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.916050 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6687d9896d-v96j2" event={"ID":"fa527f64-6e38-48c2-9927-a319f4579070","Type":"ContainerStarted","Data":"080ebd3682c04f85d87960e452b1d2aad6833e4217c592f0f75127df01aa50cf"} Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.918588 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerStarted","Data":"9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e"} Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.921354 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95402218-fbbb-4453-aba6-d135ba3a26bd","Type":"ContainerStarted","Data":"99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed"} Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.955617 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wrcpc"] Feb 19 10:02:45 crc kubenswrapper[4873]: I0219 10:02:45.960450 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=5.416168261 podStartE2EDuration="35.960431043s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="2026-02-19 10:02:12.800174861 +0000 UTC m=+1042.089606499" lastFinishedPulling="2026-02-19 10:02:43.344437613 +0000 UTC m=+1072.633869281" observedRunningTime="2026-02-19 10:02:45.942672372 +0000 UTC m=+1075.232104010" watchObservedRunningTime="2026-02-19 10:02:45.960431043 +0000 UTC m=+1075.249862671" Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.070995 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] Feb 19 10:02:46 crc 
kubenswrapper[4873]: I0219 10:02:46.170854 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.259803 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.291784 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.393983 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:46 crc kubenswrapper[4873]: W0219 10:02:46.418585 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefd66693_8b61_499e_a8c8_f8545b8fcced.slice/crio-3a663286d199f6315c6fc0d35eabda8f21a34ebbb20580efa338c69320c9e613 WatchSource:0}: Error finding container 3a663286d199f6315c6fc0d35eabda8f21a34ebbb20580efa338c69320c9e613: Status 404 returned error can't find the container with id 3a663286d199f6315c6fc0d35eabda8f21a34ebbb20580efa338c69320c9e613 Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.935499 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b8008736-31ec-491c-aa52-03b9413feab9","Type":"ContainerStarted","Data":"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.939423 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerStarted","Data":"ab78ff8f102582e8303092caf70a3741f3f6463262da1956c73f2a02fe74dcaa"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.943305 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4pv5z" 
event={"ID":"943d069e-6ad4-4411-b937-c4499f0ced6f","Type":"ContainerStarted","Data":"e48e8c3f3cd0f5266c11f4e70aa13217161d56e20c834665683a06bd7308e111"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.946580 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerStarted","Data":"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.964040 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=17.92166552 podStartE2EDuration="36.964020801s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="2026-02-19 10:02:12.739310721 +0000 UTC m=+1042.028742369" lastFinishedPulling="2026-02-19 10:02:31.781666012 +0000 UTC m=+1061.071097650" observedRunningTime="2026-02-19 10:02:46.960310029 +0000 UTC m=+1076.249741667" watchObservedRunningTime="2026-02-19 10:02:46.964020801 +0000 UTC m=+1076.253452449" Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.983712 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6687d9896d-v96j2" event={"ID":"fa527f64-6e38-48c2-9927-a319f4579070","Type":"ContainerStarted","Data":"41ce1383b1efd52fcaaa5f36442aae9782ce54733f0cce74dcd958919193025c"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.983945 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6687d9896d-v96j2" event={"ID":"fa527f64-6e38-48c2-9927-a319f4579070","Type":"ContainerStarted","Data":"e4d390edab0549e2140b5a29610a7484352c87ec56208011ef852543b6dab746"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.989819 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerStarted","Data":"63ad458896e287b047bf524f55c92f7c8b727110e2a9fed53b0acb11b2c99b24"} 
Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.989860 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerStarted","Data":"cd1c8470e0eda76a81d6e96f1fc492dbb2365558bb9cd6d4a1af5b28681c31e0"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.989870 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerStarted","Data":"5c219368094a8a1c527a293923d373b84ac19d1c24f32f1ef5514bf9fa1b2437"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.990989 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.993412 4873 generic.go:334] "Generic (PLEG): container finished" podID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerID="2156fdadae7d71bb536233ced37bfe76646867be4fb2b42c0784cff65fb2da11" exitCode=0 Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.993463 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" event={"ID":"64deb684-42f6-4bb5-b774-ef57839a56d5","Type":"ContainerDied","Data":"2156fdadae7d71bb536233ced37bfe76646867be4fb2b42c0784cff65fb2da11"} Feb 19 10:02:46 crc kubenswrapper[4873]: I0219 10:02:46.993482 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" event={"ID":"64deb684-42f6-4bb5-b774-ef57839a56d5","Type":"ContainerStarted","Data":"de0152a57feb0720c3ff97d1d52995f66e5e8c9b3cc0aff67e6dfa65b92a668a"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.007449 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4pv5z" podStartSLOduration=5.487297785 podStartE2EDuration="37.007432358s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="2026-02-19 10:02:12.987773295 +0000 
UTC m=+1042.277204933" lastFinishedPulling="2026-02-19 10:02:44.507907848 +0000 UTC m=+1073.797339506" observedRunningTime="2026-02-19 10:02:46.983480664 +0000 UTC m=+1076.272912302" watchObservedRunningTime="2026-02-19 10:02:47.007432358 +0000 UTC m=+1076.296863996" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.024395 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerStarted","Data":"3a663286d199f6315c6fc0d35eabda8f21a34ebbb20580efa338c69320c9e613"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.055008 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6687d9896d-v96j2" podStartSLOduration=26.681275187 podStartE2EDuration="27.054988648s" podCreationTimestamp="2026-02-19 10:02:20 +0000 UTC" firstStartedPulling="2026-02-19 10:02:45.476476567 +0000 UTC m=+1074.765908215" lastFinishedPulling="2026-02-19 10:02:45.850190028 +0000 UTC m=+1075.139621676" observedRunningTime="2026-02-19 10:02:47.007397037 +0000 UTC m=+1076.296828675" watchObservedRunningTime="2026-02-19 10:02:47.054988648 +0000 UTC m=+1076.344420276" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.063056 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerStarted","Data":"f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.063094 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerStarted","Data":"1a1b6f4ba694daddb17f029a0bbce06c79e8294e69f096dade9d91ac98c03f81"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.075373 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98gbw" 
event={"ID":"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6","Type":"ContainerStarted","Data":"ad115d69dacdb41d674a33e1db809c1ccc6821733d5a2f7e41e2ae5cb63809b4"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.078879 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=15.07884934 podStartE2EDuration="15.07884934s" podCreationTimestamp="2026-02-19 10:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:47.026777928 +0000 UTC m=+1076.316209566" watchObservedRunningTime="2026-02-19 10:02:47.07884934 +0000 UTC m=+1076.368280978" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.097248 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrcpc" event={"ID":"58099bc8-1a29-467b-b13d-c0713e42e6c2","Type":"ContainerStarted","Data":"a196b181363d1056b517a86bc31a20f9a28399d782296cb45561ba646a621a77"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.097299 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrcpc" event={"ID":"58099bc8-1a29-467b-b13d-c0713e42e6c2","Type":"ContainerStarted","Data":"ed07e84b25a983ea418f7869298191a80a5dc7f605f24c8aefa1a6f1b3d88cd5"} Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.119473 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-87df9b646-2jf26" podStartSLOduration=26.769485796 podStartE2EDuration="27.119452987s" podCreationTimestamp="2026-02-19 10:02:20 +0000 UTC" firstStartedPulling="2026-02-19 10:02:45.476455557 +0000 UTC m=+1074.765887215" lastFinishedPulling="2026-02-19 10:02:45.826422748 +0000 UTC m=+1075.115854406" observedRunningTime="2026-02-19 10:02:47.080913461 +0000 UTC m=+1076.370345099" watchObservedRunningTime="2026-02-19 10:02:47.119452987 +0000 UTC m=+1076.408884625" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 
10:02:47.127896 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-98gbw" podStartSLOduration=3.722887562 podStartE2EDuration="36.127877836s" podCreationTimestamp="2026-02-19 10:02:11 +0000 UTC" firstStartedPulling="2026-02-19 10:02:13.225126884 +0000 UTC m=+1042.514558522" lastFinishedPulling="2026-02-19 10:02:45.630117148 +0000 UTC m=+1074.919548796" observedRunningTime="2026-02-19 10:02:47.099682517 +0000 UTC m=+1076.389114155" watchObservedRunningTime="2026-02-19 10:02:47.127877836 +0000 UTC m=+1076.417309474" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.136768 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wrcpc" podStartSLOduration=14.136753087 podStartE2EDuration="14.136753087s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:47.125175139 +0000 UTC m=+1076.414606777" watchObservedRunningTime="2026-02-19 10:02:47.136753087 +0000 UTC m=+1076.426184725" Feb 19 10:02:47 crc kubenswrapper[4873]: I0219 10:02:47.533216 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f77dfd79f-tg9w4" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: i/o timeout" Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.139907 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerStarted","Data":"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784"} Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.141911 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerStarted","Data":"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3"} Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.150979 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" event={"ID":"64deb684-42f6-4bb5-b774-ef57839a56d5","Type":"ContainerStarted","Data":"b6d3e647789f4d02e72fca723b661c289177f805a67c5edd75747ee3947add92"} Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.175439 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" podStartSLOduration=15.175412285 podStartE2EDuration="15.175412285s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:48.168492683 +0000 UTC m=+1077.457924321" watchObservedRunningTime="2026-02-19 10:02:48.175412285 +0000 UTC m=+1077.464843923" Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.192360 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 10:02:48 crc kubenswrapper[4873]: I0219 10:02:48.789469 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.163151 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerStarted","Data":"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269"} Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.163224 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-log" 
containerID="cri-o://d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" gracePeriod=30 Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.163306 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-httpd" containerID="cri-o://f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" gracePeriod=30 Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.165706 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerStarted","Data":"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9"} Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.165964 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.166438 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-log" containerID="cri-o://da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" gracePeriod=30 Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.166600 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-httpd" containerID="cri-o://d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" gracePeriod=30 Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.190879 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.190847706 podStartE2EDuration="16.190847706s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:49.183402541 +0000 UTC m=+1078.472834179" watchObservedRunningTime="2026-02-19 10:02:49.190847706 +0000 UTC m=+1078.480279344" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.213469 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.213449827 podStartE2EDuration="16.213449827s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:49.202937586 +0000 UTC m=+1078.492369214" watchObservedRunningTime="2026-02-19 10:02:49.213449827 +0000 UTC m=+1078.502881465" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.872645 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.954461 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992438 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992505 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288js\" (UniqueName: \"kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992548 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992608 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992640 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992713 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.992732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs\") pod \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\" (UID: \"6e3a8bf8-a885-4a50-97d2-53df598b1ce9\") " Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.996238 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs" (OuterVolumeSpecName: "logs") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4873]: I0219 10:02:49.997450 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.002476 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts" (OuterVolumeSpecName: "scripts") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.002492 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.007264 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js" (OuterVolumeSpecName: "kube-api-access-288js") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "kube-api-access-288js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.033222 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.066703 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data" (OuterVolumeSpecName: "config-data") pod "6e3a8bf8-a885-4a50-97d2-53df598b1ce9" (UID: "6e3a8bf8-a885-4a50-97d2-53df598b1ce9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.096780 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.096831 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.096859 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.096935 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxh4r\" (UniqueName: \"kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097026 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097055 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097129 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle\") pod \"efd66693-8b61-499e-a8c8-f8545b8fcced\" (UID: \"efd66693-8b61-499e-a8c8-f8545b8fcced\") " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097441 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097452 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097460 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097468 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288js\" (UniqueName: \"kubernetes.io/projected/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-kube-api-access-288js\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097478 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097486 4873 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6e3a8bf8-a885-4a50-97d2-53df598b1ce9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.097504 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.101359 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.101598 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r" (OuterVolumeSpecName: "kube-api-access-wxh4r") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "kube-api-access-wxh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.101611 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs" (OuterVolumeSpecName: "logs") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.104333 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts" (OuterVolumeSpecName: "scripts") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.107253 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.117791 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.136283 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.159846 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data" (OuterVolumeSpecName: "config-data") pod "efd66693-8b61-499e-a8c8-f8545b8fcced" (UID: "efd66693-8b61-499e-a8c8-f8545b8fcced"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.183942 4873 generic.go:334] "Generic (PLEG): container finished" podID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerID="d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" exitCode=0 Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.184971 4873 generic.go:334] "Generic (PLEG): container finished" podID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerID="da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" exitCode=143 Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.184074 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.184027 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerDied","Data":"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.185359 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerDied","Data":"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.185381 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e3a8bf8-a885-4a50-97d2-53df598b1ce9","Type":"ContainerDied","Data":"ab78ff8f102582e8303092caf70a3741f3f6463262da1956c73f2a02fe74dcaa"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.185401 4873 scope.go:117] "RemoveContainer" containerID="d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.190352 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerID="f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" exitCode=0 Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.190381 4873 generic.go:334] "Generic (PLEG): container finished" podID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerID="d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" exitCode=143 Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.191306 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.192462 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerDied","Data":"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.192538 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerDied","Data":"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.192551 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"efd66693-8b61-499e-a8c8-f8545b8fcced","Type":"ContainerDied","Data":"3a663286d199f6315c6fc0d35eabda8f21a34ebbb20580efa338c69320c9e613"} Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.192650 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198718 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198749 4873 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198759 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198768 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd66693-8b61-499e-a8c8-f8545b8fcced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198777 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198785 4873 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd66693-8b61-499e-a8c8-f8545b8fcced-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198812 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.198821 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxh4r\" (UniqueName: \"kubernetes.io/projected/efd66693-8b61-499e-a8c8-f8545b8fcced-kube-api-access-wxh4r\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.217717 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.224065 4873 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.237198 4873 scope.go:117] "RemoveContainer" containerID="da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.253055 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.261005 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.273002 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.276685 4873 scope.go:117] "RemoveContainer" containerID="d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.292204 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9\": container with ID starting with d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9 not found: ID does not exist" containerID="d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.292444 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9"} err="failed to get container status \"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9\": rpc error: code = NotFound desc = could not find container \"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9\": container with ID starting with 
d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9 not found: ID does not exist" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.292522 4873 scope.go:117] "RemoveContainer" containerID="da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.302365 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3\": container with ID starting with da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3 not found: ID does not exist" containerID="da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.302409 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3"} err="failed to get container status \"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3\": rpc error: code = NotFound desc = could not find container \"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3\": container with ID starting with da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3 not found: ID does not exist" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.302440 4873 scope.go:117] "RemoveContainer" containerID="d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.302850 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9"} err="failed to get container status \"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9\": rpc error: code = NotFound desc = could not find container \"d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9\": container with ID 
starting with d8c9d310f509d718822d8a656e17a11949d51434323480d7753635e919f55fb9 not found: ID does not exist" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.302897 4873 scope.go:117] "RemoveContainer" containerID="da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.303593 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.310853 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3"} err="failed to get container status \"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3\": rpc error: code = NotFound desc = could not find container \"da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3\": container with ID starting with da8e8797e08ad3efb1311ccadaefae92c7b004bdf2afd3678b7f82b472bcc5e3 not found: ID does not exist" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.310906 4873 scope.go:117] "RemoveContainer" containerID="f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.325036 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326161 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326242 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326301 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326347 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326413 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326461 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326518 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326565 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326624 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="init" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326672 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="init" Feb 19 10:02:50 crc kubenswrapper[4873]: E0219 10:02:50.326732 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.326859 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.327326 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" 
containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.353319 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.353366 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-log" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.353393 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8c0292-715e-4d4d-a552-5229adfc3e74" containerName="dnsmasq-dns" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.353414 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" containerName="glance-httpd" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.355227 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.355290 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.355376 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.362175 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.370236 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.370801 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.370869 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n9qxt" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.371283 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.373197 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.373582 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.374294 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.381431 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507260 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507616 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507671 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507700 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507752 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507779 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507817 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507846 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507917 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507959 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.507990 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.508013 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcss\" (UniqueName: \"kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss\") pod 
\"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.508036 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.508074 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhjk\" (UniqueName: \"kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.508148 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.508179 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.610687 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.610776 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.610800 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.610817 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggcss\" (UniqueName: \"kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611634 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611665 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhjk\" (UniqueName: 
\"kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611704 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611728 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611781 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611841 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611862 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611860 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611898 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611922 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611949 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611973 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.611856 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.612758 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.613459 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.616552 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.618569 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.620744 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.623367 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.632760 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.635071 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 
10:02:50.637065 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.638163 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhjk\" (UniqueName: \"kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.640493 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.640531 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggcss\" (UniqueName: \"kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.642682 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.648636 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.679247 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.701664 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.733364 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.744744 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.842045 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.842167 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.948685 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:50 crc kubenswrapper[4873]: I0219 10:02:50.948756 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.042754 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.082679 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.205587 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.239758 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.291964 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.292042 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.328041 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 
10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.511382 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3a8bf8-a885-4a50-97d2-53df598b1ce9" path="/var/lib/kubelet/pods/6e3a8bf8-a885-4a50-97d2-53df598b1ce9/volumes" Feb 19 10:02:51 crc kubenswrapper[4873]: I0219 10:02:51.512808 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd66693-8b61-499e-a8c8-f8545b8fcced" path="/var/lib/kubelet/pods/efd66693-8b61-499e-a8c8-f8545b8fcced/volumes" Feb 19 10:02:52 crc kubenswrapper[4873]: I0219 10:02:52.239084 4873 generic.go:334] "Generic (PLEG): container finished" podID="58099bc8-1a29-467b-b13d-c0713e42e6c2" containerID="a196b181363d1056b517a86bc31a20f9a28399d782296cb45561ba646a621a77" exitCode=0 Feb 19 10:02:52 crc kubenswrapper[4873]: I0219 10:02:52.239198 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrcpc" event={"ID":"58099bc8-1a29-467b-b13d-c0713e42e6c2","Type":"ContainerDied","Data":"a196b181363d1056b517a86bc31a20f9a28399d782296cb45561ba646a621a77"} Feb 19 10:02:52 crc kubenswrapper[4873]: I0219 10:02:52.283168 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 10:02:52 crc kubenswrapper[4873]: I0219 10:02:52.342468 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.191856 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.197028 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.249915 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="b8008736-31ec-491c-aa52-03b9413feab9" containerName="watcher-decision-engine" 
containerID="cri-o://a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149" gracePeriod=30 Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.256639 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.790266 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.851971 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"] Feb 19 10:02:53 crc kubenswrapper[4873]: I0219 10:02:53.857866 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="dnsmasq-dns" containerID="cri-o://7a12463c2cf197b1f920440df50985d94ae3e7a22c56ad882d01bc741d80d703" gracePeriod=10 Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.265473 4873 scope.go:117] "RemoveContainer" containerID="d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.272012 4873 generic.go:334] "Generic (PLEG): container finished" podID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerID="7a12463c2cf197b1f920440df50985d94ae3e7a22c56ad882d01bc741d80d703" exitCode=0 Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.272195 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerName="watcher-applier" containerID="cri-o://99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" gracePeriod=30 Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.272465 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" 
event={"ID":"402372ed-3c0d-4d12-a4f5-bbd82024a08d","Type":"ContainerDied","Data":"7a12463c2cf197b1f920440df50985d94ae3e7a22c56ad882d01bc741d80d703"} Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.360949 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.432330 4873 scope.go:117] "RemoveContainer" containerID="f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" Feb 19 10:02:54 crc kubenswrapper[4873]: E0219 10:02:54.439647 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269\": container with ID starting with f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269 not found: ID does not exist" containerID="f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.439698 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269"} err="failed to get container status \"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269\": rpc error: code = NotFound desc = could not find container \"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269\": container with ID starting with f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.439719 4873 scope.go:117] "RemoveContainer" containerID="d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" Feb 19 10:02:54 crc kubenswrapper[4873]: E0219 10:02:54.446055 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784\": container with ID starting with d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784 not found: ID does not exist" containerID="d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.446341 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784"} err="failed to get container status \"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784\": rpc error: code = NotFound desc = could not find container \"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784\": container with ID starting with d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.446361 4873 scope.go:117] "RemoveContainer" containerID="f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.450307 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269"} err="failed to get container status \"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269\": rpc error: code = NotFound desc = could not find container \"f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269\": container with ID starting with f6480951dbeac0ab665a2aa5fd54473ab3ba4b503cfac0f509921fa0368f3269 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.450340 4873 scope.go:117] "RemoveContainer" containerID="d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.451011 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784"} err="failed to get container status \"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784\": rpc error: code = NotFound desc = could not find container \"d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784\": container with ID starting with d2e4e37b9a2920a7efd73dedf273c3a7b41869d40b515eddd13310356cb89784 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.498683 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.498726 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.498831 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jctkh\" (UniqueName: \"kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.498867 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.498901 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.499004 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle\") pod \"58099bc8-1a29-467b-b13d-c0713e42e6c2\" (UID: \"58099bc8-1a29-467b-b13d-c0713e42e6c2\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.506858 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.507270 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts" (OuterVolumeSpecName: "scripts") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.517824 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh" (OuterVolumeSpecName: "kube-api-access-jctkh") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "kube-api-access-jctkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.546984 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.552044 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.592280 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data" (OuterVolumeSpecName: "config-data") pod "58099bc8-1a29-467b-b13d-c0713e42e6c2" (UID: "58099bc8-1a29-467b-b13d-c0713e42e6c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601366 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601396 4873 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601407 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601415 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jctkh\" (UniqueName: \"kubernetes.io/projected/58099bc8-1a29-467b-b13d-c0713e42e6c2-kube-api-access-jctkh\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601427 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.601437 4873 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58099bc8-1a29-467b-b13d-c0713e42e6c2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.739053 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.905887 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.905969 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.905998 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vchwr\" (UniqueName: \"kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.906067 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.906123 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.906176 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config\") pod \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\" (UID: \"402372ed-3c0d-4d12-a4f5-bbd82024a08d\") " Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.910129 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr" (OuterVolumeSpecName: "kube-api-access-vchwr") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "kube-api-access-vchwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.951230 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.960195 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.962698 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.967337 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config" (OuterVolumeSpecName: "config") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.981390 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "402372ed-3c0d-4d12-a4f5-bbd82024a08d" (UID: "402372ed-3c0d-4d12-a4f5-bbd82024a08d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:54 crc kubenswrapper[4873]: I0219 10:02:54.989445 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:54 crc kubenswrapper[4873]: W0219 10:02:54.993307 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a3b30a9_f42d_4ac8_a0d0_9c03d0071c7a.slice/crio-27dd31c2ce043db502b2393bca38366353aaef589f4415a46def82efc00bdbd7 WatchSource:0}: Error finding container 27dd31c2ce043db502b2393bca38366353aaef589f4415a46def82efc00bdbd7: Status 404 returned error can't find the container with id 27dd31c2ce043db502b2393bca38366353aaef589f4415a46def82efc00bdbd7 Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008676 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008713 4873 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008726 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008736 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008746 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/402372ed-3c0d-4d12-a4f5-bbd82024a08d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.008754 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vchwr\" (UniqueName: \"kubernetes.io/projected/402372ed-3c0d-4d12-a4f5-bbd82024a08d-kube-api-access-vchwr\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.088787 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:02:55 crc kubenswrapper[4873]: W0219 10:02:55.095249 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da8b72b_fdc0_4c00_a1da_cdb5e8e04e8e.slice/crio-d67a114aa214902359c1e29c718493f6dd023a96ca4f5d6261a47eb5f1d136c6 WatchSource:0}: Error finding container d67a114aa214902359c1e29c718493f6dd023a96ca4f5d6261a47eb5f1d136c6: Status 404 returned error can't find the container with id d67a114aa214902359c1e29c718493f6dd023a96ca4f5d6261a47eb5f1d136c6 Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.308623 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerStarted","Data":"27dd31c2ce043db502b2393bca38366353aaef589f4415a46def82efc00bdbd7"} Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.323718 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wrcpc" event={"ID":"58099bc8-1a29-467b-b13d-c0713e42e6c2","Type":"ContainerDied","Data":"ed07e84b25a983ea418f7869298191a80a5dc7f605f24c8aefa1a6f1b3d88cd5"} Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.323756 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed07e84b25a983ea418f7869298191a80a5dc7f605f24c8aefa1a6f1b3d88cd5" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.323777 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wrcpc" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.350412 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" event={"ID":"402372ed-3c0d-4d12-a4f5-bbd82024a08d","Type":"ContainerDied","Data":"676e73049674f4406ff08bc00e5c61a6ce15ece9c685ed8a76d9fda336863789"} Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.350484 4873 scope.go:117] "RemoveContainer" containerID="7a12463c2cf197b1f920440df50985d94ae3e7a22c56ad882d01bc741d80d703" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.351186 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-99d6b5b4f-2j7fk" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.353738 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerStarted","Data":"d67a114aa214902359c1e29c718493f6dd023a96ca4f5d6261a47eb5f1d136c6"} Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.370866 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerStarted","Data":"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1"} Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.389666 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"] Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.403255 4873 scope.go:117] "RemoveContainer" containerID="0caa3e8656105d67ae98953c7c54ce1e536f9e27f2d0305163026fbf53ca79e0" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.414166 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-99d6b5b4f-2j7fk"] Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.507714 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" path="/var/lib/kubelet/pods/402372ed-3c0d-4d12-a4f5-bbd82024a08d/volumes" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.508426 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5fcd445c48-xvpw4"] Feb 19 10:02:55 crc kubenswrapper[4873]: E0219 10:02:55.510278 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="init" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.510304 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="init" Feb 19 10:02:55 crc 
kubenswrapper[4873]: E0219 10:02:55.510326 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="dnsmasq-dns" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.510336 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="dnsmasq-dns" Feb 19 10:02:55 crc kubenswrapper[4873]: E0219 10:02:55.510350 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58099bc8-1a29-467b-b13d-c0713e42e6c2" containerName="keystone-bootstrap" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.510358 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="58099bc8-1a29-467b-b13d-c0713e42e6c2" containerName="keystone-bootstrap" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.510630 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="402372ed-3c0d-4d12-a4f5-bbd82024a08d" containerName="dnsmasq-dns" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.510670 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="58099bc8-1a29-467b-b13d-c0713e42e6c2" containerName="keystone-bootstrap" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.511433 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fcd445c48-xvpw4"] Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.511552 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.524649 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.524915 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.525780 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-27d74" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.526152 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.526392 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.530095 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619421 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-fernet-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-config-data\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619596 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-internal-tls-certs\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619628 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-public-tls-certs\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619666 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g25z\" (UniqueName: \"kubernetes.io/projected/ed86f09e-909d-451b-96c0-9b4b7b27eb03-kube-api-access-8g25z\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619690 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-combined-ca-bundle\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619708 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-credential-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.619726 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-scripts\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723562 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-internal-tls-certs\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723643 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-public-tls-certs\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723674 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g25z\" (UniqueName: \"kubernetes.io/projected/ed86f09e-909d-451b-96c0-9b4b7b27eb03-kube-api-access-8g25z\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723712 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-combined-ca-bundle\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723736 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-credential-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723758 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-scripts\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723830 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-fernet-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.723857 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-config-data\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.730578 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-combined-ca-bundle\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.740278 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-public-tls-certs\") pod 
\"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.740945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-internal-tls-certs\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.743084 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-credential-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.743336 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-fernet-keys\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.743447 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-config-data\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.743698 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g25z\" (UniqueName: \"kubernetes.io/projected/ed86f09e-909d-451b-96c0-9b4b7b27eb03-kube-api-access-8g25z\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" 
Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.756239 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed86f09e-909d-451b-96c0-9b4b7b27eb03-scripts\") pod \"keystone-5fcd445c48-xvpw4\" (UID: \"ed86f09e-909d-451b-96c0-9b4b7b27eb03\") " pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:55 crc kubenswrapper[4873]: I0219 10:02:55.844908 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.187061 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.298767 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed is running failed: container process not found" containerID="99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.299147 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed is running failed: container process not found" containerID="99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.299666 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed is running failed: container process not found" 
containerID="99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.299705 4873 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerName="watcher-applier" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.339396 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data\") pod \"b8008736-31ec-491c-aa52-03b9413feab9\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.339518 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle\") pod \"b8008736-31ec-491c-aa52-03b9413feab9\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.339553 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca\") pod \"b8008736-31ec-491c-aa52-03b9413feab9\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.339630 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf92r\" (UniqueName: \"kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r\") pod \"b8008736-31ec-491c-aa52-03b9413feab9\" (UID: 
\"b8008736-31ec-491c-aa52-03b9413feab9\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.339791 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs\") pod \"b8008736-31ec-491c-aa52-03b9413feab9\" (UID: \"b8008736-31ec-491c-aa52-03b9413feab9\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.347359 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs" (OuterVolumeSpecName: "logs") pod "b8008736-31ec-491c-aa52-03b9413feab9" (UID: "b8008736-31ec-491c-aa52-03b9413feab9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.348985 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r" (OuterVolumeSpecName: "kube-api-access-cf92r") pod "b8008736-31ec-491c-aa52-03b9413feab9" (UID: "b8008736-31ec-491c-aa52-03b9413feab9"). InnerVolumeSpecName "kube-api-access-cf92r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.399893 4873 generic.go:334] "Generic (PLEG): container finished" podID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerID="99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" exitCode=0 Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.399967 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95402218-fbbb-4453-aba6-d135ba3a26bd","Type":"ContainerDied","Data":"99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.401895 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerStarted","Data":"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.409913 4873 generic.go:334] "Generic (PLEG): container finished" podID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" containerID="ad115d69dacdb41d674a33e1db809c1ccc6821733d5a2f7e41e2ae5cb63809b4" exitCode=0 Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.409953 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98gbw" event={"ID":"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6","Type":"ContainerDied","Data":"ad115d69dacdb41d674a33e1db809c1ccc6821733d5a2f7e41e2ae5cb63809b4"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.433552 4873 generic.go:334] "Generic (PLEG): container finished" podID="b8008736-31ec-491c-aa52-03b9413feab9" containerID="a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149" exitCode=1 Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.433614 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"b8008736-31ec-491c-aa52-03b9413feab9","Type":"ContainerDied","Data":"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.433647 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b8008736-31ec-491c-aa52-03b9413feab9","Type":"ContainerDied","Data":"a2ea00441668ccb8b861c9df038d7b7675d321f71184b1fb21464b11d16d7eef"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.433665 4873 scope.go:117] "RemoveContainer" containerID="a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.433686 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.438512 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerStarted","Data":"00e17aa3c77a8dac057b0211e38bac6faa2ba84727374dd1e7825f5c8cd0363b"} Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.440086 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fcd445c48-xvpw4"] Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.445158 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf92r\" (UniqueName: \"kubernetes.io/projected/b8008736-31ec-491c-aa52-03b9413feab9-kube-api-access-cf92r\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.445184 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8008736-31ec-491c-aa52-03b9413feab9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.459399 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b8008736-31ec-491c-aa52-03b9413feab9" (UID: "b8008736-31ec-491c-aa52-03b9413feab9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.489418 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8008736-31ec-491c-aa52-03b9413feab9" (UID: "b8008736-31ec-491c-aa52-03b9413feab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.517252 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data" (OuterVolumeSpecName: "config-data") pod "b8008736-31ec-491c-aa52-03b9413feab9" (UID: "b8008736-31ec-491c-aa52-03b9413feab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.550284 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.550312 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.550324 4873 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b8008736-31ec-491c-aa52-03b9413feab9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.629253 4873 scope.go:117] "RemoveContainer" containerID="a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149" Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.630044 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149\": container with ID starting with a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149 not found: ID does not exist" containerID="a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.630084 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149"} err="failed to get container status \"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149\": rpc error: code = NotFound desc = could not find container \"a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149\": container with ID starting 
with a99eef84ecab86eece9faca56461f62ff2124a5b8881a9de7669714effc8b149 not found: ID does not exist" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.702699 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.773085 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.801182 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.812034 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.812681 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8008736-31ec-491c-aa52-03b9413feab9" containerName="watcher-decision-engine" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.812707 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8008736-31ec-491c-aa52-03b9413feab9" containerName="watcher-decision-engine" Feb 19 10:02:56 crc kubenswrapper[4873]: E0219 10:02:56.812734 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerName="watcher-applier" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.812740 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerName="watcher-applier" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.812897 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8008736-31ec-491c-aa52-03b9413feab9" containerName="watcher-decision-engine" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.812920 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" containerName="watcher-applier" Feb 19 10:02:56 
crc kubenswrapper[4873]: I0219 10:02:56.813502 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.816433 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.841161 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.854503 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs\") pod \"95402218-fbbb-4453-aba6-d135ba3a26bd\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.854636 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle\") pod \"95402218-fbbb-4453-aba6-d135ba3a26bd\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.854696 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b9v6\" (UniqueName: \"kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6\") pod \"95402218-fbbb-4453-aba6-d135ba3a26bd\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.854759 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data\") pod \"95402218-fbbb-4453-aba6-d135ba3a26bd\" (UID: \"95402218-fbbb-4453-aba6-d135ba3a26bd\") " Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.854929 4873 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs" (OuterVolumeSpecName: "logs") pod "95402218-fbbb-4453-aba6-d135ba3a26bd" (UID: "95402218-fbbb-4453-aba6-d135ba3a26bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.855301 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95402218-fbbb-4453-aba6-d135ba3a26bd-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.859164 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6" (OuterVolumeSpecName: "kube-api-access-4b9v6") pod "95402218-fbbb-4453-aba6-d135ba3a26bd" (UID: "95402218-fbbb-4453-aba6-d135ba3a26bd"). InnerVolumeSpecName "kube-api-access-4b9v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.914609 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95402218-fbbb-4453-aba6-d135ba3a26bd" (UID: "95402218-fbbb-4453-aba6-d135ba3a26bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.960784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzrl\" (UniqueName: \"kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.961030 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.961160 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.961381 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.961542 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data\") pod \"watcher-decision-engine-0\" (UID: 
\"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.963901 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.963928 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b9v6\" (UniqueName: \"kubernetes.io/projected/95402218-fbbb-4453-aba6-d135ba3a26bd-kube-api-access-4b9v6\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:56 crc kubenswrapper[4873]: I0219 10:02:56.981336 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data" (OuterVolumeSpecName: "config-data") pod "95402218-fbbb-4453-aba6-d135ba3a26bd" (UID: "95402218-fbbb-4453-aba6-d135ba3a26bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.065298 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.065435 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzrl\" (UniqueName: \"kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.065472 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.065513 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.065572 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc 
kubenswrapper[4873]: I0219 10:02:57.065647 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95402218-fbbb-4453-aba6-d135ba3a26bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.066380 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.070228 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.070628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.070716 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.087095 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.087647 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6mzrl\" (UniqueName: \"kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl\") pod \"watcher-decision-engine-0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.089602 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api-log" containerID="cri-o://cd1c8470e0eda76a81d6e96f1fc492dbb2365558bb9cd6d4a1af5b28681c31e0" gracePeriod=30 Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.090010 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api" containerID="cri-o://63ad458896e287b047bf524f55c92f7c8b727110e2a9fed53b0acb11b2c99b24" gracePeriod=30 Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.134536 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.508058 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8008736-31ec-491c-aa52-03b9413feab9" path="/var/lib/kubelet/pods/b8008736-31ec-491c-aa52-03b9413feab9/volumes" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.509929 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerStarted","Data":"4ba322d3698975ce137f4a01e18bf101e9ef8127a707662eba0de8fcfb0c7ffe"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.527240 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95402218-fbbb-4453-aba6-d135ba3a26bd","Type":"ContainerDied","Data":"db54f1f84baadc90d4260a330bf2f720ca4cd24fbaeefa2f3e5f18f033d41844"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.527271 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.527746 4873 scope.go:117] "RemoveContainer" containerID="99300e7b247193a93a473310bd314f670d8dad13e2e93b1e553f7e8d446453ed" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.539993 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerStarted","Data":"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.547940 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fcd445c48-xvpw4" event={"ID":"ed86f09e-909d-451b-96c0-9b4b7b27eb03","Type":"ContainerStarted","Data":"69a430e9811337d1d7781794bfb96f803c68ee550b820556087a0414b2457040"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.547980 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fcd445c48-xvpw4" event={"ID":"ed86f09e-909d-451b-96c0-9b4b7b27eb03","Type":"ContainerStarted","Data":"f37425886de9ea059884eff8e220b6be7a28cb06584c590c19ebb7a83a84c7d0"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.549870 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.556632 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.556614243 podStartE2EDuration="7.556614243s" podCreationTimestamp="2026-02-19 10:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:57.545503397 +0000 UTC m=+1086.834935025" watchObservedRunningTime="2026-02-19 10:02:57.556614243 +0000 UTC m=+1086.846045881" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.581139 4873 
generic.go:334] "Generic (PLEG): container finished" podID="943d069e-6ad4-4411-b937-c4499f0ced6f" containerID="e48e8c3f3cd0f5266c11f4e70aa13217161d56e20c834665683a06bd7308e111" exitCode=0 Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.581225 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4pv5z" event={"ID":"943d069e-6ad4-4411-b937-c4499f0ced6f","Type":"ContainerDied","Data":"e48e8c3f3cd0f5266c11f4e70aa13217161d56e20c834665683a06bd7308e111"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.590129 4873 generic.go:334] "Generic (PLEG): container finished" podID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerID="cd1c8470e0eda76a81d6e96f1fc492dbb2365558bb9cd6d4a1af5b28681c31e0" exitCode=143 Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.590320 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerDied","Data":"cd1c8470e0eda76a81d6e96f1fc492dbb2365558bb9cd6d4a1af5b28681c31e0"} Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.601004 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.600979603 podStartE2EDuration="7.600979603s" podCreationTimestamp="2026-02-19 10:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:57.578200808 +0000 UTC m=+1086.867632446" watchObservedRunningTime="2026-02-19 10:02:57.600979603 +0000 UTC m=+1086.890411241" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.612001 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5fcd445c48-xvpw4" podStartSLOduration=2.611983316 podStartE2EDuration="2.611983316s" podCreationTimestamp="2026-02-19 10:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:57.596508692 +0000 UTC m=+1086.885940330" watchObservedRunningTime="2026-02-19 10:02:57.611983316 +0000 UTC m=+1086.901414954" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.628360 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: E0219 10:02:57.643839 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95402218_fbbb_4453_aba6_d135ba3a26bd.slice/crio-db54f1f84baadc90d4260a330bf2f720ca4cd24fbaeefa2f3e5f18f033d41844\": RecentStats: unable to find data in memory cache]" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.651494 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.662037 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.663201 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.677219 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.681206 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.689083 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.785498 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-config-data\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.785625 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.785702 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8q8\" (UniqueName: \"kubernetes.io/projected/3d0e231c-7848-4f57-a28b-dfec3c87b617-kube-api-access-zz8q8\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.785750 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0e231c-7848-4f57-a28b-dfec3c87b617-logs\") pod \"watcher-applier-0\" 
(UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.887256 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-config-data\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.887654 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.887736 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz8q8\" (UniqueName: \"kubernetes.io/projected/3d0e231c-7848-4f57-a28b-dfec3c87b617-kube-api-access-zz8q8\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.887773 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0e231c-7848-4f57-a28b-dfec3c87b617-logs\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.889030 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0e231c-7848-4f57-a28b-dfec3c87b617-logs\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.896422 4873 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.897736 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0e231c-7848-4f57-a28b-dfec3c87b617-config-data\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.908343 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz8q8\" (UniqueName: \"kubernetes.io/projected/3d0e231c-7848-4f57-a28b-dfec3c87b617-kube-api-access-zz8q8\") pod \"watcher-applier-0\" (UID: \"3d0e231c-7848-4f57-a28b-dfec3c87b617\") " pod="openstack/watcher-applier-0" Feb 19 10:02:57 crc kubenswrapper[4873]: I0219 10:02:57.967954 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.090511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9t4k\" (UniqueName: \"kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k\") pod \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.090601 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts\") pod \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.090702 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs\") pod \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.090872 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data\") pod \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.090922 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle\") pod \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\" (UID: \"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.091477 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs" (OuterVolumeSpecName: "logs") pod "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" (UID: "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.095307 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts" (OuterVolumeSpecName: "scripts") pod "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" (UID: "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.096318 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k" (OuterVolumeSpecName: "kube-api-access-w9t4k") pod "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" (UID: "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6"). InnerVolumeSpecName "kube-api-access-w9t4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.118279 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.121491 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" (UID: "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.122064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data" (OuterVolumeSpecName: "config-data") pod "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" (UID: "ec5489a2-23e2-4875-a19b-d15b4ad6c8c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.193047 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.193377 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.193391 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9t4k\" (UniqueName: \"kubernetes.io/projected/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-kube-api-access-w9t4k\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.193402 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.194343 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.229731 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" 
containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": read tcp 10.217.0.2:42954->10.217.0.163:9322: read: connection reset by peer" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.230034 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": read tcp 10.217.0.2:42950->10.217.0.163:9322: read: connection reset by peer" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.561351 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6696d67b98-wrvnm"] Feb 19 10:02:58 crc kubenswrapper[4873]: E0219 10:02:58.561790 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" containerName="placement-db-sync" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.561806 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" containerName="placement-db-sync" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.562022 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" containerName="placement-db-sync" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.563257 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.566632 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.566832 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.577720 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6696d67b98-wrvnm"] Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600561 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vtn\" (UniqueName: \"kubernetes.io/projected/c5d4dde9-793b-403e-8701-84cca6a509e1-kube-api-access-67vtn\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600614 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-scripts\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-internal-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600674 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-config-data\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600705 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-combined-ca-bundle\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600751 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d4dde9-793b-403e-8701-84cca6a509e1-logs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.600766 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-public-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.604495 4873 generic.go:334] "Generic (PLEG): container finished" podID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerID="63ad458896e287b047bf524f55c92f7c8b727110e2a9fed53b0acb11b2c99b24" exitCode=0 Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.604691 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerDied","Data":"63ad458896e287b047bf524f55c92f7c8b727110e2a9fed53b0acb11b2c99b24"} Feb 19 10:02:58 crc kubenswrapper[4873]: 
I0219 10:02:58.635780 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerStarted","Data":"a62f2b1b9a301f2ebebd7bf5613870ec1c3fc6e4830ec22341716d60b02e765d"} Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.635820 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerStarted","Data":"6715424b51c6df78b1881817986335974e70067799bdff519c5527858f40bf0f"} Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.655600 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-98gbw" event={"ID":"ec5489a2-23e2-4875-a19b-d15b4ad6c8c6","Type":"ContainerDied","Data":"37b0ac3d48e8bec4044f6f8f22d9abb8c79cc58cedf7d1bbf6b0fb89fcc2a84a"} Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.655641 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b0ac3d48e8bec4044f6f8f22d9abb8c79cc58cedf7d1bbf6b0fb89fcc2a84a" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.655674 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-98gbw" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.656676 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.656659234 podStartE2EDuration="2.656659234s" podCreationTimestamp="2026-02-19 10:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:58.654377047 +0000 UTC m=+1087.943808685" watchObservedRunningTime="2026-02-19 10:02:58.656659234 +0000 UTC m=+1087.946090872" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.702954 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-scripts\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.711405 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-scripts\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.703295 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-internal-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.714630 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-config-data\") 
pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.714733 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-combined-ca-bundle\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.714774 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d4dde9-793b-403e-8701-84cca6a509e1-logs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.714790 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-public-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.715132 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vtn\" (UniqueName: \"kubernetes.io/projected/c5d4dde9-793b-403e-8701-84cca6a509e1-kube-api-access-67vtn\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.719803 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-internal-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: 
\"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.724362 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5d4dde9-793b-403e-8701-84cca6a509e1-logs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.725307 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-combined-ca-bundle\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.728247 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-public-tls-certs\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.737924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5d4dde9-793b-403e-8701-84cca6a509e1-config-data\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.744727 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vtn\" (UniqueName: \"kubernetes.io/projected/c5d4dde9-793b-403e-8701-84cca6a509e1-kube-api-access-67vtn\") pod \"placement-6696d67b98-wrvnm\" (UID: \"c5d4dde9-793b-403e-8701-84cca6a509e1\") " pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc 
kubenswrapper[4873]: I0219 10:02:58.772071 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.859274 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.888301 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.923923 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data\") pod \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.924017 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwmp\" (UniqueName: \"kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp\") pod \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.924039 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca\") pod \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.924187 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle\") pod \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.924299 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs\") pod \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\" (UID: \"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2\") " Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.925738 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs" (OuterVolumeSpecName: "logs") pod "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" (UID: "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4873]: I0219 10:02:58.950271 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp" (OuterVolumeSpecName: "kube-api-access-jrwmp") pod "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" (UID: "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2"). InnerVolumeSpecName "kube-api-access-jrwmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.026972 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwmp\" (UniqueName: \"kubernetes.io/projected/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-kube-api-access-jrwmp\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.027006 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.039324 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" (UID: "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.074248 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" (UID: "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.077333 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data" (OuterVolumeSpecName: "config-data") pod "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" (UID: "de2aeb33-bbce-4b15-a2c4-ce80764ef0c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.090544 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4pv5z" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.127978 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78b5v\" (UniqueName: \"kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v\") pod \"943d069e-6ad4-4411-b937-c4499f0ced6f\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.128047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data\") pod \"943d069e-6ad4-4411-b937-c4499f0ced6f\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.128117 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle\") pod \"943d069e-6ad4-4411-b937-c4499f0ced6f\" (UID: \"943d069e-6ad4-4411-b937-c4499f0ced6f\") " Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.129320 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.129348 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.129360 4873 reconciler_common.go:293] "Volume detached for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.148760 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v" (OuterVolumeSpecName: "kube-api-access-78b5v") pod "943d069e-6ad4-4411-b937-c4499f0ced6f" (UID: "943d069e-6ad4-4411-b937-c4499f0ced6f"). InnerVolumeSpecName "kube-api-access-78b5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.149056 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "943d069e-6ad4-4411-b937-c4499f0ced6f" (UID: "943d069e-6ad4-4411-b937-c4499f0ced6f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.160549 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "943d069e-6ad4-4411-b937-c4499f0ced6f" (UID: "943d069e-6ad4-4411-b937-c4499f0ced6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.231236 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78b5v\" (UniqueName: \"kubernetes.io/projected/943d069e-6ad4-4411-b937-c4499f0ced6f-kube-api-access-78b5v\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.231264 4873 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.231273 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/943d069e-6ad4-4411-b937-c4499f0ced6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.451302 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6696d67b98-wrvnm"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.510722 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95402218-fbbb-4453-aba6-d135ba3a26bd" path="/var/lib/kubelet/pods/95402218-fbbb-4453-aba6-d135ba3a26bd/volumes" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.683707 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3d0e231c-7848-4f57-a28b-dfec3c87b617","Type":"ContainerStarted","Data":"b3d36f077efd4110d6b89432fd6d1d4edec15caa67228502bc5035f6a21b3517"} Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.683750 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"3d0e231c-7848-4f57-a28b-dfec3c87b617","Type":"ContainerStarted","Data":"f08d44f6d42fcc54a7c13ee85c8b5e428b7cfde82ddce8291971b4f378eefbec"} Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.688869 4873 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-6696d67b98-wrvnm" event={"ID":"c5d4dde9-793b-403e-8701-84cca6a509e1","Type":"ContainerStarted","Data":"6dfe316e1d4a830fcd595802df839bbc29f0e446bb5bc6d014b82d92b16ddd5d"} Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.717663 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4pv5z" event={"ID":"943d069e-6ad4-4411-b937-c4499f0ced6f","Type":"ContainerDied","Data":"db07d7286194b278e2cec929f66edc47c3ebbe39668738c5d112ac9e99a6a103"} Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.717712 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db07d7286194b278e2cec929f66edc47c3ebbe39668738c5d112ac9e99a6a103" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.717790 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4pv5z" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.721163 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.7211395830000003 podStartE2EDuration="2.721139583s" podCreationTimestamp="2026-02-19 10:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:59.70612579 +0000 UTC m=+1088.995557438" watchObservedRunningTime="2026-02-19 10:02:59.721139583 +0000 UTC m=+1089.010571221" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.731636 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"de2aeb33-bbce-4b15-a2c4-ce80764ef0c2","Type":"ContainerDied","Data":"5c219368094a8a1c527a293923d373b84ac19d1c24f32f1ef5514bf9fa1b2437"} Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.731781 4873 scope.go:117] "RemoveContainer" containerID="63ad458896e287b047bf524f55c92f7c8b727110e2a9fed53b0acb11b2c99b24" Feb 19 10:02:59 crc 
kubenswrapper[4873]: I0219 10:02:59.732618 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.810207 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.823003 4873 scope.go:117] "RemoveContainer" containerID="cd1c8470e0eda76a81d6e96f1fc492dbb2365558bb9cd6d4a1af5b28681c31e0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.856225 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.878711 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:59 crc kubenswrapper[4873]: E0219 10:02:59.879143 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943d069e-6ad4-4411-b937-c4499f0ced6f" containerName="barbican-db-sync" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.879159 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="943d069e-6ad4-4411-b937-c4499f0ced6f" containerName="barbican-db-sync" Feb 19 10:02:59 crc kubenswrapper[4873]: E0219 10:02:59.879171 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api-log" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.879177 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api-log" Feb 19 10:02:59 crc kubenswrapper[4873]: E0219 10:02:59.879190 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.879196 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api" Feb 19 10:02:59 crc 
kubenswrapper[4873]: I0219 10:02:59.879401 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="943d069e-6ad4-4411-b937-c4499f0ced6f" containerName="barbican-db-sync" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.879421 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api-log" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.879441 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" containerName="watcher-api" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.880386 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.889692 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.890168 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.890327 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.906191 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-config-data\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961172 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961292 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961426 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnc4h\" (UniqueName: \"kubernetes.io/projected/9fb835f9-7ac4-4212-a372-b793c2fb8afd-kube-api-access-qnc4h\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961476 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961569 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.961612 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb835f9-7ac4-4212-a372-b793c2fb8afd-logs\") pod 
\"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.973384 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-667444df98-tdgw9"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.974839 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.980793 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.982415 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.982628 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t72rv" Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.994176 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-596d5556df-fx4q8"] Feb 19 10:02:59 crc kubenswrapper[4873]: I0219 10:02:59.996066 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.002902 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.015017 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-667444df98-tdgw9"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.051173 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-596d5556df-fx4q8"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063305 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data-custom\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063354 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063396 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6c6\" (UniqueName: \"kubernetes.io/projected/fc48b70c-5ab9-4765-a8cd-5985a3f63854-kube-api-access-nt6c6\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063430 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnc4h\" (UniqueName: 
\"kubernetes.io/projected/9fb835f9-7ac4-4212-a372-b793c2fb8afd-kube-api-access-qnc4h\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063453 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063469 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-combined-ca-bundle\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063495 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be5e1ee-a214-46ca-a5bf-d1d337848085-logs\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063510 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063523 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data-custom\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063552 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063577 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb835f9-7ac4-4212-a372-b793c2fb8afd-logs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063597 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-combined-ca-bundle\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063634 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc48b70c-5ab9-4765-a8cd-5985a3f63854-logs\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063653 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-config-data\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063668 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063692 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.063711 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqvf\" (UniqueName: \"kubernetes.io/projected/9be5e1ee-a214-46ca-a5bf-d1d337848085-kube-api-access-rlqvf\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.069410 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.071091 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.072720 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb835f9-7ac4-4212-a372-b793c2fb8afd-logs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.073161 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.087690 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.096738 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.097038 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.097298 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-config-data\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc 
kubenswrapper[4873]: I0219 10:03:00.100412 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnc4h\" (UniqueName: \"kubernetes.io/projected/9fb835f9-7ac4-4212-a372-b793c2fb8afd-kube-api-access-qnc4h\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.102002 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb835f9-7ac4-4212-a372-b793c2fb8afd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9fb835f9-7ac4-4212-a372-b793c2fb8afd\") " pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165606 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt6c6\" (UniqueName: \"kubernetes.io/projected/fc48b70c-5ab9-4765-a8cd-5985a3f63854-kube-api-access-nt6c6\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165680 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-combined-ca-bundle\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165711 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be5e1ee-a214-46ca-a5bf-d1d337848085-logs\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165727 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165743 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data-custom\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165774 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165823 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-combined-ca-bundle\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165841 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 
10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165865 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165887 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165902 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165931 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc48b70c-5ab9-4765-a8cd-5985a3f63854-logs\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.165963 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc 
kubenswrapper[4873]: I0219 10:03:00.165984 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqvf\" (UniqueName: \"kubernetes.io/projected/9be5e1ee-a214-46ca-a5bf-d1d337848085-kube-api-access-rlqvf\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.166024 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data-custom\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.166075 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbv6s\" (UniqueName: \"kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.170242 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc48b70c-5ab9-4765-a8cd-5985a3f63854-logs\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.170474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be5e1ee-a214-46ca-a5bf-d1d337848085-logs\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" 
Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.170914 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.172945 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.177022 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.177283 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.187694 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-config-data-custom\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.188186 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data-custom\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.191749 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt6c6\" (UniqueName: \"kubernetes.io/projected/fc48b70c-5ab9-4765-a8cd-5985a3f63854-kube-api-access-nt6c6\") pod 
\"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.192466 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.192520 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be5e1ee-a214-46ca-a5bf-d1d337848085-combined-ca-bundle\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.196561 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-config-data\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.202955 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc48b70c-5ab9-4765-a8cd-5985a3f63854-combined-ca-bundle\") pod \"barbican-worker-596d5556df-fx4q8\" (UID: \"fc48b70c-5ab9-4765-a8cd-5985a3f63854\") " pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.219795 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqvf\" (UniqueName: \"kubernetes.io/projected/9be5e1ee-a214-46ca-a5bf-d1d337848085-kube-api-access-rlqvf\") pod \"barbican-keystone-listener-667444df98-tdgw9\" (UID: \"9be5e1ee-a214-46ca-a5bf-d1d337848085\") " pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.233284 4873 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271570 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271629 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271654 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271673 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271695 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " 
pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271710 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271728 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271745 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271776 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb7v\" (UniqueName: \"kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271841 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbv6s\" (UniqueName: \"kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" 
(UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.271907 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.272682 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.273585 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.274095 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.274398 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 
10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.274521 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.288610 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbv6s\" (UniqueName: \"kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s\") pod \"dnsmasq-dns-55dcd76767-7nrrt\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.351501 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.371467 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-596d5556df-fx4q8" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.376560 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.376603 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.376628 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.376646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.376676 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlb7v\" (UniqueName: \"kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " 
pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.377928 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.380167 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.382838 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.382863 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.410696 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlb7v\" (UniqueName: \"kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v\") pod \"barbican-api-785b79c884-tswfl\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: 
I0219 10:03:00.414587 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.525119 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.736642 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.736996 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.745999 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.746047 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.799505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6696d67b98-wrvnm" event={"ID":"c5d4dde9-793b-403e-8701-84cca6a509e1","Type":"ContainerStarted","Data":"9dd376fab0cae7be53b1b672a4896b860045853ce1d0dafe4c84f50b5a3f2b10"} Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.817364 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.828869 4873 generic.go:334] "Generic (PLEG): container finished" podID="99868e3f-82d7-4f0c-9056-661e95486e6e" containerID="22b91ea45d57e3f8ed16da3e5a4058c15af39a6d914075c4521ba6755b03990b" exitCode=0 Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.829784 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf762" 
event={"ID":"99868e3f-82d7-4f0c-9056-661e95486e6e","Type":"ContainerDied","Data":"22b91ea45d57e3f8ed16da3e5a4058c15af39a6d914075c4521ba6755b03990b"} Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.843420 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.845590 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.885719 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.890300 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.934320 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 10:03:00 crc kubenswrapper[4873]: I0219 10:03:00.951821 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6687d9896d-v96j2" podUID="fa527f64-6e38-48c2-9927-a319f4579070" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.146700 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-667444df98-tdgw9"] Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.270240 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-596d5556df-fx4q8"] Feb 19 10:03:01 crc 
kubenswrapper[4873]: W0219 10:03:01.359360 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc48b70c_5ab9_4765_a8cd_5985a3f63854.slice/crio-5da214c6b3840e2d0ccfbd4713138b3b333fc0c9f0e6b77ec488c97edfdd49c3 WatchSource:0}: Error finding container 5da214c6b3840e2d0ccfbd4713138b3b333fc0c9f0e6b77ec488c97edfdd49c3: Status 404 returned error can't find the container with id 5da214c6b3840e2d0ccfbd4713138b3b333fc0c9f0e6b77ec488c97edfdd49c3 Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.368194 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.380196 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.513235 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2aeb33-bbce-4b15-a2c4-ce80764ef0c2" path="/var/lib/kubelet/pods/de2aeb33-bbce-4b15-a2c4-ce80764ef0c2/volumes" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.852273 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerStarted","Data":"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.852576 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerStarted","Data":"aa044ff7142e0c26ee94862c0e4c5ca488a9ed1c7a1ffa3af69735d62ea70cbd"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.855951 4873 generic.go:334] "Generic (PLEG): container finished" podID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerID="a00affcc69d0a0f0f0948ccec9e176ec543a48a936aa44d66436705028401e67" exitCode=0 Feb 19 
10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.856014 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" event={"ID":"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9","Type":"ContainerDied","Data":"a00affcc69d0a0f0f0948ccec9e176ec543a48a936aa44d66436705028401e67"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.856039 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" event={"ID":"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9","Type":"ContainerStarted","Data":"b5551cf30b386908e2cde5ca7747852cdfecf30b5d0e1c2e9424decee5253a25"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.862006 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" event={"ID":"9be5e1ee-a214-46ca-a5bf-d1d337848085","Type":"ContainerStarted","Data":"fce4e5e0d318f754e2493e1c3317e9d3d11c50fd765ae2ce3c5cc6f56be7b7c2"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.869659 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9fb835f9-7ac4-4212-a372-b793c2fb8afd","Type":"ContainerStarted","Data":"6ebdf78bcbfc7429f77cd01f756b2ee54279512ab9259f219782422b0537ad75"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.869709 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9fb835f9-7ac4-4212-a372-b793c2fb8afd","Type":"ContainerStarted","Data":"048a19eaf4c1fb549e71b49a7d0f9f321fa31fb431a4095960c4f28e580369d4"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.869721 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9fb835f9-7ac4-4212-a372-b793c2fb8afd","Type":"ContainerStarted","Data":"dd06cf8fd74630da92b4e069bb8f9c8fd98bc8b40165ddd7c9e643a5377ce5e1"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.870227 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-api-0" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.871448 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9fb835f9-7ac4-4212-a372-b793c2fb8afd" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.174:9322/\": dial tcp 10.217.0.174:9322: connect: connection refused" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.879726 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-596d5556df-fx4q8" event={"ID":"fc48b70c-5ab9-4765-a8cd-5985a3f63854","Type":"ContainerStarted","Data":"5da214c6b3840e2d0ccfbd4713138b3b333fc0c9f0e6b77ec488c97edfdd49c3"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.885483 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerID="a62f2b1b9a301f2ebebd7bf5613870ec1c3fc6e4830ec22341716d60b02e765d" exitCode=1 Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.885543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerDied","Data":"a62f2b1b9a301f2ebebd7bf5613870ec1c3fc6e4830ec22341716d60b02e765d"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.886185 4873 scope.go:117] "RemoveContainer" containerID="a62f2b1b9a301f2ebebd7bf5613870ec1c3fc6e4830ec22341716d60b02e765d" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.895659 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gqrb5" event={"ID":"ce5accb4-1da0-4a21-a289-7dba33ad935f","Type":"ContainerStarted","Data":"2874d7c078f6aebe4e7f936700ecedd6916a0afc5a2e7ddcc365abe01b70926a"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.925848 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6696d67b98-wrvnm" 
event={"ID":"c5d4dde9-793b-403e-8701-84cca6a509e1","Type":"ContainerStarted","Data":"e5601faf67f3908d22845acd48383e8297ed5e25db35806c3ec713fb4493b2b6"} Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.925919 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.925938 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.925952 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.925966 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.926328 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.926632 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.958037 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.958018487 podStartE2EDuration="2.958018487s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:01.909924924 +0000 UTC m=+1091.199356552" watchObservedRunningTime="2026-02-19 10:03:01.958018487 +0000 UTC m=+1091.247450125" Feb 19 10:03:01 crc kubenswrapper[4873]: I0219 10:03:01.979599 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gqrb5" podStartSLOduration=5.057272507 
podStartE2EDuration="51.979573202s" podCreationTimestamp="2026-02-19 10:02:10 +0000 UTC" firstStartedPulling="2026-02-19 10:02:12.727531649 +0000 UTC m=+1042.016963287" lastFinishedPulling="2026-02-19 10:02:59.649832344 +0000 UTC m=+1088.939263982" observedRunningTime="2026-02-19 10:03:01.929306695 +0000 UTC m=+1091.218738353" watchObservedRunningTime="2026-02-19 10:03:01.979573202 +0000 UTC m=+1091.269004840" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.008735 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6696d67b98-wrvnm" podStartSLOduration=4.008715535 podStartE2EDuration="4.008715535s" podCreationTimestamp="2026-02-19 10:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:01.96983478 +0000 UTC m=+1091.259266428" watchObservedRunningTime="2026-02-19 10:03:02.008715535 +0000 UTC m=+1091.298147173" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.563623 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vf762" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.647783 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config\") pod \"99868e3f-82d7-4f0c-9056-661e95486e6e\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.647923 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48k6w\" (UniqueName: \"kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w\") pod \"99868e3f-82d7-4f0c-9056-661e95486e6e\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.647961 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle\") pod \"99868e3f-82d7-4f0c-9056-661e95486e6e\" (UID: \"99868e3f-82d7-4f0c-9056-661e95486e6e\") " Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.695745 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w" (OuterVolumeSpecName: "kube-api-access-48k6w") pod "99868e3f-82d7-4f0c-9056-661e95486e6e" (UID: "99868e3f-82d7-4f0c-9056-661e95486e6e"). InnerVolumeSpecName "kube-api-access-48k6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.746326 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config" (OuterVolumeSpecName: "config") pod "99868e3f-82d7-4f0c-9056-661e95486e6e" (UID: "99868e3f-82d7-4f0c-9056-661e95486e6e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.751516 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.751548 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48k6w\" (UniqueName: \"kubernetes.io/projected/99868e3f-82d7-4f0c-9056-661e95486e6e-kube-api-access-48k6w\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.775295 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99868e3f-82d7-4f0c-9056-661e95486e6e" (UID: "99868e3f-82d7-4f0c-9056-661e95486e6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.853309 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99868e3f-82d7-4f0c-9056-661e95486e6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.940050 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerStarted","Data":"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18"} Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.940321 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.940339 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:02 crc 
kubenswrapper[4873]: I0219 10:03:02.943177 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" event={"ID":"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9","Type":"ContainerStarted","Data":"69b8a44f78afd834bc213e814516e741948268a9c9eaf6506fb2962a05c91488"} Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.943300 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.945763 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vf762" event={"ID":"99868e3f-82d7-4f0c-9056-661e95486e6e","Type":"ContainerDied","Data":"51130049c4f72c45b52b368bcf10130af9e763c98f2e5fc842a0ae20064148f7"} Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.945784 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vf762" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.945794 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51130049c4f72c45b52b368bcf10130af9e763c98f2e5fc842a0ae20064148f7" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.956827 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-785b79c884-tswfl" podStartSLOduration=2.956810307 podStartE2EDuration="2.956810307s" podCreationTimestamp="2026-02-19 10:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:02.955548765 +0000 UTC m=+1092.244980403" watchObservedRunningTime="2026-02-19 10:03:02.956810307 +0000 UTC m=+1092.246241945" Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.957358 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerStarted","Data":"5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0"} Feb 19 10:03:02 crc kubenswrapper[4873]: I0219 10:03:02.989827 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" podStartSLOduration=3.9898022749999997 podStartE2EDuration="3.989802275s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:02.976333011 +0000 UTC m=+1092.265764649" watchObservedRunningTime="2026-02-19 10:03:02.989802275 +0000 UTC m=+1092.279233913" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.118767 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.236765 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.329470 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:03 crc kubenswrapper[4873]: E0219 10:03:03.329881 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99868e3f-82d7-4f0c-9056-661e95486e6e" containerName="neutron-db-sync" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.329892 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="99868e3f-82d7-4f0c-9056-661e95486e6e" containerName="neutron-db-sync" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.330067 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="99868e3f-82d7-4f0c-9056-661e95486e6e" containerName="neutron-db-sync" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.331048 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.338721 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.450186 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.451713 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.461439 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c4d59d6dd-4nh9w"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.463074 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: W0219 10:03:03.469136 4873 reflector.go:561] object-"openstack"/"cert-barbican-internal-svc": failed to list *v1.Secret: secrets "cert-barbican-internal-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 10:03:03 crc kubenswrapper[4873]: E0219 10:03:03.469181 4873 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-barbican-internal-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 10:03:03 crc kubenswrapper[4873]: W0219 10:03:03.471135 4873 reflector.go:561] object-"openstack"/"cert-barbican-public-svc": failed to list *v1.Secret: secrets "cert-barbican-public-svc" is forbidden: User "system:node:crc" cannot list resource 
"secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 10:03:03 crc kubenswrapper[4873]: E0219 10:03:03.471191 4873 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-barbican-public-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.473547 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pk4jm" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.473778 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.473906 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.474016 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480462 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccds\" (UniqueName: \"kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: 
\"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480607 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480696 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480723 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.480822 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.546059 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586579 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7sp8t\" (UniqueName: \"kubernetes.io/projected/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-kube-api-access-7sp8t\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586658 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586680 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586696 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586717 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-logs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586730 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-combined-ca-bundle\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586801 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data-custom\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586868 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586889 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccds\" (UniqueName: \"kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586918 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586939 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.586953 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkk7t\" (UniqueName: \"kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.587022 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.587042 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.587061 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.587857 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.587976 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.588716 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.589070 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.589731 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.590480 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c4d59d6dd-4nh9w"] Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.673069 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccds\" (UniqueName: \"kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds\") pod \"dnsmasq-dns-697b559f65-2zvb5\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.679923 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.689350 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data-custom\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.689608 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.689713 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.689813 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkk7t\" (UniqueName: \"kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.689982 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " 
pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690063 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690164 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sp8t\" (UniqueName: \"kubernetes.io/projected/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-kube-api-access-7sp8t\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690268 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690339 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690421 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-logs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc 
kubenswrapper[4873]: I0219 10:03:03.690492 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-combined-ca-bundle\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.690586 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.693596 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-logs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.700575 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.704784 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data-custom\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.705328 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.705876 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.706718 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.725569 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-config-data\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.736846 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkk7t\" (UniqueName: \"kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t\") pod \"neutron-749b6895f6-pmvtl\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.738843 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sp8t\" (UniqueName: \"kubernetes.io/projected/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-kube-api-access-7sp8t\") pod 
\"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.750333 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-combined-ca-bundle\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:03 crc kubenswrapper[4873]: I0219 10:03:03.802915 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:04 crc kubenswrapper[4873]: E0219 10:03:04.691186 4873 secret.go:188] Couldn't get secret openstack/cert-barbican-public-svc: failed to sync secret cache: timed out waiting for the condition Feb 19 10:03:04 crc kubenswrapper[4873]: E0219 10:03:04.691529 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs podName:76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3 nodeName:}" failed. No retries permitted until 2026-02-19 10:03:05.191510942 +0000 UTC m=+1094.480942580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs") pod "barbican-api-c4d59d6dd-4nh9w" (UID: "76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3") : failed to sync secret cache: timed out waiting for the condition Feb 19 10:03:04 crc kubenswrapper[4873]: E0219 10:03:04.691765 4873 secret.go:188] Couldn't get secret openstack/cert-barbican-internal-svc: failed to sync secret cache: timed out waiting for the condition Feb 19 10:03:04 crc kubenswrapper[4873]: E0219 10:03:04.691789 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs podName:76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3 nodeName:}" failed. No retries permitted until 2026-02-19 10:03:05.191782909 +0000 UTC m=+1094.481214537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs") pod "barbican-api-c4d59d6dd-4nh9w" (UID: "76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3") : failed to sync secret cache: timed out waiting for the condition Feb 19 10:03:04 crc kubenswrapper[4873]: I0219 10:03:04.744471 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 10:03:04 crc kubenswrapper[4873]: I0219 10:03:04.790931 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 10:03:04 crc kubenswrapper[4873]: I0219 10:03:04.989439 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="dnsmasq-dns" containerID="cri-o://69b8a44f78afd834bc213e814516e741948268a9c9eaf6506fb2962a05c91488" gracePeriod=10 Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.146878 4873 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.147231 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.179253 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.179365 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.233210 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.233324 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.234160 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.238628 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-public-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.245692 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3-internal-tls-certs\") pod \"barbican-api-c4d59d6dd-4nh9w\" (UID: \"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3\") " pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.317213 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.481249 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:03:05 crc kubenswrapper[4873]: I0219 10:03:05.879461 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.018066 4873 generic.go:334] "Generic (PLEG): container finished" podID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerID="69b8a44f78afd834bc213e814516e741948268a9c9eaf6506fb2962a05c91488" exitCode=0 Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.019251 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" event={"ID":"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9","Type":"ContainerDied","Data":"69b8a44f78afd834bc213e814516e741948268a9c9eaf6506fb2962a05c91488"} Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.315065 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76cc4fb9fc-vdfd4"] Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.317630 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.329501 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.329720 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.341759 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76cc4fb9fc-vdfd4"] Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359609 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-public-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359660 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-ovndb-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359732 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-internal-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359797 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-combined-ca-bundle\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359846 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-httpd-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359873 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.359917 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r2n\" (UniqueName: \"kubernetes.io/projected/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-kube-api-access-x2r2n\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.461525 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-public-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.461900 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-ovndb-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.461978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-internal-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.462043 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-combined-ca-bundle\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.462089 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-httpd-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.462138 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.462187 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2r2n\" (UniqueName: \"kubernetes.io/projected/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-kube-api-access-x2r2n\") pod 
\"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.469205 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.473888 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-public-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.473882 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-internal-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.473945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-ovndb-tls-certs\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.525753 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-httpd-config\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc 
kubenswrapper[4873]: I0219 10:03:06.525840 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-combined-ca-bundle\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.530807 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2r2n\" (UniqueName: \"kubernetes.io/projected/f168d086-aaa7-4a6e-9a65-5ab28e10a7e8-kube-api-access-x2r2n\") pod \"neutron-76cc4fb9fc-vdfd4\" (UID: \"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8\") " pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.615720 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 10:03:06 crc kubenswrapper[4873]: I0219 10:03:06.677241 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.031020 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerID="5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0" exitCode=1 Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.031058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerDied","Data":"5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0"} Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.031089 4873 scope.go:117] "RemoveContainer" containerID="a62f2b1b9a301f2ebebd7bf5613870ec1c3fc6e4830ec22341716d60b02e765d" Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.031674 4873 scope.go:117] "RemoveContainer" containerID="5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0" Feb 19 10:03:07 crc kubenswrapper[4873]: E0219 10:03:07.031862 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ab7f7779-d6dd-4844-8af5-83ade972d9d0)\"" pod="openstack/watcher-decision-engine-0" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.135674 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:07 crc kubenswrapper[4873]: I0219 10:03:07.135730 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:08 crc kubenswrapper[4873]: I0219 10:03:08.040266 4873 scope.go:117] "RemoveContainer" containerID="5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0" Feb 19 10:03:08 crc 
kubenswrapper[4873]: E0219 10:03:08.040745 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ab7f7779-d6dd-4844-8af5-83ade972d9d0)\"" pod="openstack/watcher-decision-engine-0" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" Feb 19 10:03:08 crc kubenswrapper[4873]: I0219 10:03:08.119148 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 10:03:08 crc kubenswrapper[4873]: I0219 10:03:08.145210 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 10:03:09 crc kubenswrapper[4873]: I0219 10:03:09.083009 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.065719 4873 generic.go:334] "Generic (PLEG): container finished" podID="ce5accb4-1da0-4a21-a289-7dba33ad935f" containerID="2874d7c078f6aebe4e7f936700ecedd6916a0afc5a2e7ddcc365abe01b70926a" exitCode=0 Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.066671 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gqrb5" event={"ID":"ce5accb4-1da0-4a21-a289-7dba33ad935f","Type":"ContainerDied","Data":"2874d7c078f6aebe4e7f936700ecedd6916a0afc5a2e7ddcc365abe01b70926a"} Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.234451 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.246901 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.900672 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.965796 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.965870 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbv6s\" (UniqueName: \"kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.965971 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.966131 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.966161 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:10 crc kubenswrapper[4873]: I0219 10:03:10.966240 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0\") pod \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\" (UID: \"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9\") " Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.000976 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s" (OuterVolumeSpecName: "kube-api-access-sbv6s") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "kube-api-access-sbv6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.034511 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.060613 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.068383 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.068437 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.068450 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbv6s\" (UniqueName: \"kubernetes.io/projected/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-kube-api-access-sbv6s\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.093131 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.122933 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" event={"ID":"a0e5418f-aef3-4aba-ba9b-f4e515fb18c9","Type":"ContainerDied","Data":"b5551cf30b386908e2cde5ca7747852cdfecf30b5d0e1c2e9424decee5253a25"} Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.122999 4873 scope.go:117] "RemoveContainer" containerID="69b8a44f78afd834bc213e814516e741948268a9c9eaf6506fb2962a05c91488" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.123243 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.146615 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.147712 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config" (OuterVolumeSpecName: "config") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.155342 4873 scope.go:117] "RemoveContainer" containerID="a00affcc69d0a0f0f0948ccec9e176ec543a48a936aa44d66436705028401e67" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.177006 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.177022 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.220309 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" (UID: "a0e5418f-aef3-4aba-ba9b-f4e515fb18c9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.281994 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:11 crc kubenswrapper[4873]: E0219 10:03:11.300418 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.376764 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.477906 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.529038 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55dcd76767-7nrrt"] Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.529087 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.639740 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76cc4fb9fc-vdfd4"] Feb 19 10:03:11 crc kubenswrapper[4873]: I0219 10:03:11.717925 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c4d59d6dd-4nh9w"] Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.156773 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" event={"ID":"9be5e1ee-a214-46ca-a5bf-d1d337848085","Type":"ContainerStarted","Data":"c6341001b5e009d25f87fdac6fd541731999dddfee636490d004349659a0895c"} Feb 19 10:03:12 
crc kubenswrapper[4873]: I0219 10:03:12.156826 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" event={"ID":"9be5e1ee-a214-46ca-a5bf-d1d337848085","Type":"ContainerStarted","Data":"cf7d70a01a1d05e9c05ea5d0d7ba3fe3e00df168e2fb9b29aef913e596b5166e"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.162218 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" event={"ID":"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8","Type":"ContainerStarted","Data":"1b2761c8825453002d934e520c1c7afafdbc5067bc5158f1412768c1d6a606c1"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.179961 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cc4fb9fc-vdfd4" event={"ID":"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8","Type":"ContainerStarted","Data":"851d40a64ed5633a008606bc0f8f8d6dd768aa087b79b57994c68e881d2af84b"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.187485 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-596d5556df-fx4q8" event={"ID":"fc48b70c-5ab9-4765-a8cd-5985a3f63854","Type":"ContainerStarted","Data":"f853980fd3085a7656b173299c504a1ba631f4a032f6c9b41a6889decade7904"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.187528 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-596d5556df-fx4q8" event={"ID":"fc48b70c-5ab9-4765-a8cd-5985a3f63854","Type":"ContainerStarted","Data":"4b5d0d56217e2986147a41ce27c642e2f8b06114c0d3a3036bab7c5b0cc976e1"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.199328 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-667444df98-tdgw9" podStartSLOduration=3.57806258 podStartE2EDuration="13.199298393s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="2026-02-19 10:03:01.173856583 +0000 UTC m=+1090.463288221" 
lastFinishedPulling="2026-02-19 10:03:10.795092396 +0000 UTC m=+1100.084524034" observedRunningTime="2026-02-19 10:03:12.180414084 +0000 UTC m=+1101.469845722" watchObservedRunningTime="2026-02-19 10:03:12.199298393 +0000 UTC m=+1101.488730031" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.209782 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gqrb5" event={"ID":"ce5accb4-1da0-4a21-a289-7dba33ad935f","Type":"ContainerDied","Data":"7091928b68df42cd9ae5c284cfdb9622dc758710a4af850abe1bece12bfc74a3"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.209834 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7091928b68df42cd9ae5c284cfdb9622dc758710a4af850abe1bece12bfc74a3" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.221222 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gqrb5" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.225777 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-596d5556df-fx4q8" podStartSLOduration=3.850481668 podStartE2EDuration="13.225757859s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="2026-02-19 10:03:01.403625953 +0000 UTC m=+1090.693057591" lastFinishedPulling="2026-02-19 10:03:10.778902144 +0000 UTC m=+1100.068333782" observedRunningTime="2026-02-19 10:03:12.21528926 +0000 UTC m=+1101.504720898" watchObservedRunningTime="2026-02-19 10:03:12.225757859 +0000 UTC m=+1101.515189497" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.228054 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerStarted","Data":"c2fc7030796f36afab7f9dbbf523310f26e22db5ef1e487e914036c0ac971b2a"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.228095 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerStarted","Data":"936d413c07d2e70cda379bc1d9e56c3d69a0e75d48e2a897c8fb38cdf7c08e5e"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.229661 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c4d59d6dd-4nh9w" event={"ID":"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3","Type":"ContainerStarted","Data":"0a2ecaf652ead9d88e28513a7ab548a70faada0feffc370d3043bb7d333bce02"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.248373 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerStarted","Data":"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10"} Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.248465 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="ceilometer-notification-agent" containerID="cri-o://ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e" gracePeriod=30 Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.248649 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="proxy-httpd" containerID="cri-o://bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10" gracePeriod=30 Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.248709 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="sg-core" containerID="cri-o://c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1" gracePeriod=30 Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.258715 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.305925 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.305999 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.306724 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ttv\" (UniqueName: \"kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.306810 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.306887 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.306997 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.307163 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data\") pod \"ce5accb4-1da0-4a21-a289-7dba33ad935f\" (UID: \"ce5accb4-1da0-4a21-a289-7dba33ad935f\") " Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.308274 4873 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5accb4-1da0-4a21-a289-7dba33ad935f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.313230 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.313848 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv" (OuterVolumeSpecName: "kube-api-access-k7ttv") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "kube-api-access-k7ttv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.317131 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts" (OuterVolumeSpecName: "scripts") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.358271 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.411018 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.411056 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ttv\" (UniqueName: \"kubernetes.io/projected/ce5accb4-1da0-4a21-a289-7dba33ad935f-kube-api-access-k7ttv\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.411069 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.411081 4873 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.501802 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data" (OuterVolumeSpecName: "config-data") pod "ce5accb4-1da0-4a21-a289-7dba33ad935f" (UID: "ce5accb4-1da0-4a21-a289-7dba33ad935f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.513002 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5accb4-1da0-4a21-a289-7dba33ad935f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.760422 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:12 crc kubenswrapper[4873]: I0219 10:03:12.955657 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.275487 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerStarted","Data":"a79fc5280241e2b1290e7c529d75e24bfc2e34924a4a60f5635bd08c8d066317"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.276209 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.277708 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c4d59d6dd-4nh9w" event={"ID":"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3","Type":"ContainerStarted","Data":"961603804fece1276cbfe9325147ce80b3818f8bf8b93dd68ae6303709236cd6"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.277734 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-c4d59d6dd-4nh9w" event={"ID":"76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3","Type":"ContainerStarted","Data":"894709ffcd09bfb9a507c96057667b335575a46f4c6b295398aad79b687e8c39"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.278133 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.278157 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.288239 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerID="bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10" exitCode=0 Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.288266 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerID="c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1" exitCode=2 Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.288306 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerDied","Data":"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.288330 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerDied","Data":"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.290295 4873 generic.go:334] "Generic (PLEG): container finished" podID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerID="2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f" exitCode=0 Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.290329 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" event={"ID":"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8","Type":"ContainerDied","Data":"2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.295543 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gqrb5" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.296899 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cc4fb9fc-vdfd4" event={"ID":"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8","Type":"ContainerStarted","Data":"03af86cb9a03a5748c5d591d8dc080c065aa41860d02b9f72a77071d3d585291"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.296936 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.296946 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76cc4fb9fc-vdfd4" event={"ID":"f168d086-aaa7-4a6e-9a65-5ab28e10a7e8","Type":"ContainerStarted","Data":"0c53089e0bb793cea82cf9543a9eb6219fa6e73cb39ca5491d0613d4928c1fa5"} Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.333378 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-749b6895f6-pmvtl" podStartSLOduration=10.333364314 podStartE2EDuration="10.333364314s" podCreationTimestamp="2026-02-19 10:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:13.300211555 +0000 UTC m=+1102.589643193" watchObservedRunningTime="2026-02-19 10:03:13.333364314 +0000 UTC m=+1102.622795952" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.349703 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76cc4fb9fc-vdfd4" podStartSLOduration=7.349686382 
podStartE2EDuration="7.349686382s" podCreationTimestamp="2026-02-19 10:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:13.332444121 +0000 UTC m=+1102.621875789" watchObservedRunningTime="2026-02-19 10:03:13.349686382 +0000 UTC m=+1102.639118020" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.439392 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c4d59d6dd-4nh9w" podStartSLOduration=10.439363685 podStartE2EDuration="10.439363685s" podCreationTimestamp="2026-02-19 10:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:13.391118098 +0000 UTC m=+1102.680549736" watchObservedRunningTime="2026-02-19 10:03:13.439363685 +0000 UTC m=+1102.728795353" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.516036 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" path="/var/lib/kubelet/pods/a0e5418f-aef3-4aba-ba9b-f4e515fb18c9/volumes" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.584374 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:13 crc kubenswrapper[4873]: E0219 10:03:13.585299 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="dnsmasq-dns" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.585405 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="dnsmasq-dns" Feb 19 10:03:13 crc kubenswrapper[4873]: E0219 10:03:13.585518 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" containerName="cinder-db-sync" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.585613 4873 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" containerName="cinder-db-sync" Feb 19 10:03:13 crc kubenswrapper[4873]: E0219 10:03:13.585706 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="init" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.585773 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="init" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.586034 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" containerName="cinder-db-sync" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.586154 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="dnsmasq-dns" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.587478 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.595017 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.595202 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.595310 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tmcc9" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.595435 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.641936 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.769984 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.770363 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.770624 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.772333 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94s6t\" (UniqueName: \"kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.772548 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.772677 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.776743 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.812025 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.879055 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94s6t\" (UniqueName: \"kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 
10:03:13.879149 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.879192 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.879586 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.879625 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.879759 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.889303 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.896862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.905010 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.910910 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.911046 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.922752 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.940478 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.950808 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94s6t\" (UniqueName: \"kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t\") pod \"cinder-scheduler-0\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.984005 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.984074 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc 
kubenswrapper[4873]: I0219 10:03:13.984162 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.984190 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.984210 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhw4\" (UniqueName: \"kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:13 crc kubenswrapper[4873]: I0219 10:03:13.984297 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.006140 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.008422 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.014589 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.017386 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.054418 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092260 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092429 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092469 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092504 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: 
\"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092538 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092558 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhw4\" (UniqueName: \"kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092590 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhrt\" (UniqueName: \"kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092719 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092736 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092762 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092805 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.092979 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.093013 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.107051 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.110057 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.113705 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.114672 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.122729 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.137948 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhw4\" (UniqueName: \"kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4\") pod \"dnsmasq-dns-8656fdbcc7-6lw5c\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.166145 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194210 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194307 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194326 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194349 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdhrt\" (UniqueName: \"kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194414 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.194430 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.195756 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.196663 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.202215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.202541 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.203572 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.204177 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.213758 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhrt\" (UniqueName: \"kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt\") pod \"cinder-api-0\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.240308 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.334314 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="dnsmasq-dns" containerID="cri-o://f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20" gracePeriod=10 Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.334613 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" event={"ID":"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8","Type":"ContainerStarted","Data":"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20"} Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.335720 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:14 
crc kubenswrapper[4873]: I0219 10:03:14.345827 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.367861 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.375976 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" podStartSLOduration=11.375955416 podStartE2EDuration="11.375955416s" podCreationTimestamp="2026-02-19 10:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:14.367218758 +0000 UTC m=+1103.656650406" watchObservedRunningTime="2026-02-19 10:03:14.375955416 +0000 UTC m=+1103.665387044" Feb 19 10:03:14 crc kubenswrapper[4873]: I0219 10:03:14.728970 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:14 crc kubenswrapper[4873]: W0219 10:03:14.756766 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a649f7b_88cb_4b43_bc71_06ab3237f955.slice/crio-d01b29d0a705eb2d4f63512fcea22d2e1a868ce32ce3cff9e21615cd03cc670a WatchSource:0}: Error finding container d01b29d0a705eb2d4f63512fcea22d2e1a868ce32ce3cff9e21615cd03cc670a: Status 404 returned error can't find the container with id d01b29d0a705eb2d4f63512fcea22d2e1a868ce32ce3cff9e21615cd03cc670a Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.220685 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.238791 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.270861 4873 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.308309 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337179 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337234 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337305 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337336 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337364 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: 
\"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337440 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337518 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337546 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337593 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337634 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337654 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337704 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccds\" (UniqueName: \"kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds\") pod \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\" (UID: \"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.337722 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xxqg\" (UniqueName: \"kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg\") pod \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\" (UID: \"ab448dfd-a67c-49b5-a153-92a5a6f504b2\") " Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.342775 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.343073 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.350430 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts" (OuterVolumeSpecName: "scripts") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.372754 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" event={"ID":"e78542dc-01da-47dc-aec5-a380b7484425","Type":"ContainerStarted","Data":"c7ffc8e18883ae90270b9d4c0dcb813698f920dfda3376430f83575ac81ce7b9"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.372832 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg" (OuterVolumeSpecName: "kube-api-access-4xxqg") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "kube-api-access-4xxqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.372923 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds" (OuterVolumeSpecName: "kube-api-access-sccds") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "kube-api-access-sccds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.379678 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerID="ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e" exitCode=0 Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.379724 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerDied","Data":"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.379744 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab448dfd-a67c-49b5-a153-92a5a6f504b2","Type":"ContainerDied","Data":"c341b58fa66a9c7c1455f8e33fdfb22dd5f6b0a9b06cdb661264c78977069ea2"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.379761 4873 scope.go:117] "RemoveContainer" containerID="bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.379878 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.401900 4873 generic.go:334] "Generic (PLEG): container finished" podID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerID="f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20" exitCode=0 Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.401966 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.401983 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" event={"ID":"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8","Type":"ContainerDied","Data":"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.402016 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b559f65-2zvb5" event={"ID":"6dfe8fe2-4637-46d9-b3bc-689e510c6ec8","Type":"ContainerDied","Data":"1b2761c8825453002d934e520c1c7afafdbc5067bc5158f1412768c1d6a606c1"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.419035 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55dcd76767-7nrrt" podUID="a0e5418f-aef3-4aba-ba9b-f4e515fb18c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: i/o timeout" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.421717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerStarted","Data":"8e464848b3ebc3175565441b190b29e16936e4f1ed928d10cd26c6f756af71c1"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.430090 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443436 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443462 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccds\" (UniqueName: \"kubernetes.io/projected/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-kube-api-access-sccds\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443474 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xxqg\" (UniqueName: \"kubernetes.io/projected/ab448dfd-a67c-49b5-a153-92a5a6f504b2-kube-api-access-4xxqg\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443482 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443490 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab448dfd-a67c-49b5-a153-92a5a6f504b2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.443500 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.453913 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: 
"ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.466705 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.471169 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerStarted","Data":"d01b29d0a705eb2d4f63512fcea22d2e1a868ce32ce3cff9e21615cd03cc670a"} Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.480949 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.486143 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.487814 4873 scope.go:117] "RemoveContainer" containerID="c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.498704 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config" (OuterVolumeSpecName: "config") pod "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" (UID: "6dfe8fe2-4637-46d9-b3bc-689e510c6ec8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.518448 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.529864 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data" (OuterVolumeSpecName: "config-data") pod "ab448dfd-a67c-49b5-a153-92a5a6f504b2" (UID: "ab448dfd-a67c-49b5-a153-92a5a6f504b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.530357 4873 scope.go:117] "RemoveContainer" containerID="ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545377 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545400 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545410 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545419 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545427 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545436 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.545443 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ab448dfd-a67c-49b5-a153-92a5a6f504b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.596096 4873 scope.go:117] "RemoveContainer" containerID="bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.596569 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10\": container with ID starting with bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10 not found: ID does not exist" containerID="bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.596617 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10"} err="failed to get container status \"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10\": rpc error: code = NotFound desc = could not find container \"bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10\": container with ID starting with bd797254af7d727ccef55cdbb9c374846a2546bce00a686191d6ba6af2767e10 not found: ID does not exist" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.596646 4873 scope.go:117] "RemoveContainer" containerID="c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.597386 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1\": container with ID starting with c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1 not found: ID does not exist" containerID="c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1" Feb 
19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.597419 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1"} err="failed to get container status \"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1\": rpc error: code = NotFound desc = could not find container \"c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1\": container with ID starting with c3a492f4bbf32fe7c67e7763c6c5275b1ff0f9bdb61d830125564dd30c92b1f1 not found: ID does not exist" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.597447 4873 scope.go:117] "RemoveContainer" containerID="ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.597883 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e\": container with ID starting with ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e not found: ID does not exist" containerID="ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.597920 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e"} err="failed to get container status \"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e\": rpc error: code = NotFound desc = could not find container \"ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e\": container with ID starting with ad6c5766518ab006754a3a4a7d650831c78a948d573045b4e146b11a48e7017e not found: ID does not exist" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.597967 4873 scope.go:117] "RemoveContainer" 
containerID="f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.642883 4873 scope.go:117] "RemoveContainer" containerID="2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.824780 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.831077 4873 scope.go:117] "RemoveContainer" containerID="f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.837663 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20\": container with ID starting with f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20 not found: ID does not exist" containerID="f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.837704 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20"} err="failed to get container status \"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20\": rpc error: code = NotFound desc = could not find container \"f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20\": container with ID starting with f456921ecb313aa05a7e9f52e2901ff2c55430f10241666557c3053d87edce20 not found: ID does not exist" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.837731 4873 scope.go:117] "RemoveContainer" containerID="2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.839222 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f\": container with ID starting with 2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f not found: ID does not exist" containerID="2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.839256 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f"} err="failed to get container status \"2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f\": rpc error: code = NotFound desc = could not find container \"2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f\": container with ID starting with 2e532231fb9e05c5e12daaa7682ad61edb1cc2ae1641329effcd1ef595a0472f not found: ID does not exist" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.842445 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.869903 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.890211 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697b559f65-2zvb5"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.946148 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.948532 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="init" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.948566 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="init" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.948585 4873 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="dnsmasq-dns" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.948595 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="dnsmasq-dns" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.948617 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="ceilometer-notification-agent" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.948627 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="ceilometer-notification-agent" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.948659 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="sg-core" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.948669 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="sg-core" Feb 19 10:03:15 crc kubenswrapper[4873]: E0219 10:03:15.948682 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="proxy-httpd" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.948689 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="proxy-httpd" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.951070 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="proxy-httpd" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.951181 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="ceilometer-notification-agent" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.951221 4873 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" containerName="dnsmasq-dns" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.951247 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" containerName="sg-core" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.976427 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.976881 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.982479 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:03:15 crc kubenswrapper[4873]: I0219 10:03:15.982860 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082423 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082458 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts\") pod \"ceilometer-0\" (UID: 
\"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082487 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082539 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082565 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.082583 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxxwz\" (UniqueName: \"kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184162 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184238 
4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184256 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxxwz\" (UniqueName: \"kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184331 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184399 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184446 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.184478 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data\") pod \"ceilometer-0\" (UID: 
\"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.185904 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.186219 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.191762 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.192081 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.193613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.199024 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.213749 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxxwz\" (UniqueName: \"kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz\") pod \"ceilometer-0\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.330190 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.516618 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerStarted","Data":"f05593f5088b36a20866dde0c189f6365ca5bb5d444303ca92f1e7f75e70ca2f"} Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.518338 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerStarted","Data":"7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783"} Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.526628 4873 generic.go:334] "Generic (PLEG): container finished" podID="e78542dc-01da-47dc-aec5-a380b7484425" containerID="20fe864189fb33810eb3acc7dc0b89314091b0776fb2a2bfe18804bc13374185" exitCode=0 Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.526714 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" event={"ID":"e78542dc-01da-47dc-aec5-a380b7484425","Type":"ContainerDied","Data":"20fe864189fb33810eb3acc7dc0b89314091b0776fb2a2bfe18804bc13374185"} Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.595465 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.766666 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6687d9896d-v96j2" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.856228 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.856459 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon-log" containerID="cri-o://1a1b6f4ba694daddb17f029a0bbce06c79e8294e69f096dade9d91ac98c03f81" gracePeriod=30 Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.856835 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" containerID="cri-o://f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6" gracePeriod=30 Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.870270 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 19 10:03:16 crc kubenswrapper[4873]: I0219 10:03:16.899361 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.494836 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dfe8fe2-4637-46d9-b3bc-689e510c6ec8" path="/var/lib/kubelet/pods/6dfe8fe2-4637-46d9-b3bc-689e510c6ec8/volumes" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.497544 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab448dfd-a67c-49b5-a153-92a5a6f504b2" 
path="/var/lib/kubelet/pods/ab448dfd-a67c-49b5-a153-92a5a6f504b2/volumes" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.554569 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerStarted","Data":"be36025eab8065799686eea6527ff5c2140eda51908e0f229e70d6b1bf945a72"} Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.557309 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerStarted","Data":"bde159d1e713bedb9f809d72fb731177094209d5eae3fafbca00c3bf92dd0edf"} Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.557347 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerStarted","Data":"a72128a70548416ae211c60013a87319728fca02cd7888fa60778dec8ba63ea4"} Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.559361 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" event={"ID":"e78542dc-01da-47dc-aec5-a380b7484425","Type":"ContainerStarted","Data":"89333ead7926aa2618b4a528f8056c909774aca95003b189eba1f9e1ae277295"} Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.559511 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.561294 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerStarted","Data":"d65db798423e51c2a9f8d6a3012c9ccc857964209249f5fc3748a28883833968"} Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.561397 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" 
containerName="cinder-api-log" containerID="cri-o://f05593f5088b36a20866dde0c189f6365ca5bb5d444303ca92f1e7f75e70ca2f" gracePeriod=30 Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.561432 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.561468 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api" containerID="cri-o://d65db798423e51c2a9f8d6a3012c9ccc857964209249f5fc3748a28883833968" gracePeriod=30 Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.590175 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.2054308559999996 podStartE2EDuration="4.590156927s" podCreationTimestamp="2026-02-19 10:03:13 +0000 UTC" firstStartedPulling="2026-02-19 10:03:14.76318862 +0000 UTC m=+1104.052620258" lastFinishedPulling="2026-02-19 10:03:15.147914691 +0000 UTC m=+1104.437346329" observedRunningTime="2026-02-19 10:03:17.581029959 +0000 UTC m=+1106.870461597" watchObservedRunningTime="2026-02-19 10:03:17.590156927 +0000 UTC m=+1106.879588565" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.616035 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" podStartSLOduration=4.616016764 podStartE2EDuration="4.616016764s" podCreationTimestamp="2026-02-19 10:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:17.610520077 +0000 UTC m=+1106.899951715" watchObservedRunningTime="2026-02-19 10:03:17.616016764 +0000 UTC m=+1106.905448402" Feb 19 10:03:17 crc kubenswrapper[4873]: I0219 10:03:17.644450 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=4.644430245 podStartE2EDuration="4.644430245s" podCreationTimestamp="2026-02-19 10:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:17.639747158 +0000 UTC m=+1106.929178796" watchObservedRunningTime="2026-02-19 10:03:17.644430245 +0000 UTC m=+1106.933861883" Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.012858 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:59670->10.217.0.161:8443: read: connection reset by peer" Feb 19 10:03:18 crc kubenswrapper[4873]: E0219 10:03:18.233275 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcace1157_1459_4823_aa8f_b2c246d3adeb.slice/crio-conmon-f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcace1157_1459_4823_aa8f_b2c246d3adeb.slice/crio-f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.240247 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.240301 4873 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.484009 4873 scope.go:117] "RemoveContainer" containerID="5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0" Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.572981 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerStarted","Data":"ea2c29e9b6a6ae2111af938dab80bf0fe86bf1c95d1132e043df7ce04e75e6e4"} Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.573030 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerStarted","Data":"3858e9a302b9c6afbe14985a270c0ff8aac1476a89852d70300dc30e43cfb9a9"} Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.576832 4873 generic.go:334] "Generic (PLEG): container finished" podID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerID="f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6" exitCode=0 Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.576885 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerDied","Data":"f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6"} Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.579510 4873 generic.go:334] "Generic (PLEG): container finished" podID="8786cefd-adc3-4acf-bc04-066bc0510131" containerID="f05593f5088b36a20866dde0c189f6365ca5bb5d444303ca92f1e7f75e70ca2f" exitCode=143 Feb 19 10:03:18 crc kubenswrapper[4873]: I0219 10:03:18.580492 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerDied","Data":"f05593f5088b36a20866dde0c189f6365ca5bb5d444303ca92f1e7f75e70ca2f"} Feb 19 10:03:19 crc kubenswrapper[4873]: I0219 10:03:19.009092 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 10:03:19 crc kubenswrapper[4873]: I0219 10:03:19.592870 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerStarted","Data":"cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96"} Feb 19 10:03:20 crc kubenswrapper[4873]: I0219 10:03:20.842116 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 19 10:03:21 crc kubenswrapper[4873]: I0219 10:03:21.614578 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerStarted","Data":"7e5158670e3a62976761b268e07eaed28eb3274621570c9d17f2e4ba96a59299"} Feb 19 10:03:21 crc kubenswrapper[4873]: I0219 10:03:21.614732 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:03:21 crc kubenswrapper[4873]: I0219 10:03:21.641480 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.12694094 podStartE2EDuration="6.641453901s" podCreationTimestamp="2026-02-19 10:03:15 +0000 UTC" firstStartedPulling="2026-02-19 10:03:16.932422679 +0000 UTC m=+1106.221854327" lastFinishedPulling="2026-02-19 10:03:20.446935649 +0000 UTC m=+1109.736367288" observedRunningTime="2026-02-19 
10:03:21.634805335 +0000 UTC m=+1110.924236973" watchObservedRunningTime="2026-02-19 10:03:21.641453901 +0000 UTC m=+1110.930885539" Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.193653 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.197154 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c4d59d6dd-4nh9w" Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.306158 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.306462 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-785b79c884-tswfl" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api-log" containerID="cri-o://01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611" gracePeriod=30 Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.306569 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-785b79c884-tswfl" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api" containerID="cri-o://626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18" gracePeriod=30 Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.624334 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerID="cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96" exitCode=1 Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.624389 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerDied","Data":"cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96"} Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 
10:03:22.624418 4873 scope.go:117] "RemoveContainer" containerID="5d8a6efa61f7a8c09f644a4fe742859469cc638e71ee46cb830c7a9e3cf72be0" Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.625002 4873 scope.go:117] "RemoveContainer" containerID="cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96" Feb 19 10:03:22 crc kubenswrapper[4873]: E0219 10:03:22.625238 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ab7f7779-d6dd-4844-8af5-83ade972d9d0)\"" pod="openstack/watcher-decision-engine-0" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.634671 4873 generic.go:334] "Generic (PLEG): container finished" podID="0d2df48a-78aa-4711-a0ac-268542093658" containerID="01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611" exitCode=143 Feb 19 10:03:22 crc kubenswrapper[4873]: I0219 10:03:22.634734 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerDied","Data":"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611"} Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.243846 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.298178 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.349237 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.424808 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] 
Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.425037 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="dnsmasq-dns" containerID="cri-o://b6d3e647789f4d02e72fca723b661c289177f805a67c5edd75747ee3947add92" gracePeriod=10 Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.608810 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.667426 4873 generic.go:334] "Generic (PLEG): container finished" podID="0d2df48a-78aa-4711-a0ac-268542093658" containerID="626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18" exitCode=0 Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.667493 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerDied","Data":"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18"} Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.667522 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785b79c884-tswfl" event={"ID":"0d2df48a-78aa-4711-a0ac-268542093658","Type":"ContainerDied","Data":"aa044ff7142e0c26ee94862c0e4c5ca488a9ed1c7a1ffa3af69735d62ea70cbd"} Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.667539 4873 scope.go:117] "RemoveContainer" containerID="626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.667646 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-785b79c884-tswfl" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.679485 4873 generic.go:334] "Generic (PLEG): container finished" podID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerID="b6d3e647789f4d02e72fca723b661c289177f805a67c5edd75747ee3947add92" exitCode=0 Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.679719 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="cinder-scheduler" containerID="cri-o://7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783" gracePeriod=30 Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.680039 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" event={"ID":"64deb684-42f6-4bb5-b774-ef57839a56d5","Type":"ContainerDied","Data":"b6d3e647789f4d02e72fca723b661c289177f805a67c5edd75747ee3947add92"} Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.680563 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="probe" containerID="cri-o://be36025eab8065799686eea6527ff5c2140eda51908e0f229e70d6b1bf945a72" gracePeriod=30 Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.684640 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlb7v\" (UniqueName: \"kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v\") pod \"0d2df48a-78aa-4711-a0ac-268542093658\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.684855 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle\") pod 
\"0d2df48a-78aa-4711-a0ac-268542093658\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.684915 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs\") pod \"0d2df48a-78aa-4711-a0ac-268542093658\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.684959 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data\") pod \"0d2df48a-78aa-4711-a0ac-268542093658\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.685003 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom\") pod \"0d2df48a-78aa-4711-a0ac-268542093658\" (UID: \"0d2df48a-78aa-4711-a0ac-268542093658\") " Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.687000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs" (OuterVolumeSpecName: "logs") pod "0d2df48a-78aa-4711-a0ac-268542093658" (UID: "0d2df48a-78aa-4711-a0ac-268542093658"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.690920 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d2df48a-78aa-4711-a0ac-268542093658" (UID: "0d2df48a-78aa-4711-a0ac-268542093658"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.693277 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v" (OuterVolumeSpecName: "kube-api-access-mlb7v") pod "0d2df48a-78aa-4711-a0ac-268542093658" (UID: "0d2df48a-78aa-4711-a0ac-268542093658"). InnerVolumeSpecName "kube-api-access-mlb7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.728227 4873 scope.go:117] "RemoveContainer" containerID="01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.764330 4873 scope.go:117] "RemoveContainer" containerID="626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.764347 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data" (OuterVolumeSpecName: "config-data") pod "0d2df48a-78aa-4711-a0ac-268542093658" (UID: "0d2df48a-78aa-4711-a0ac-268542093658"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.764447 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d2df48a-78aa-4711-a0ac-268542093658" (UID: "0d2df48a-78aa-4711-a0ac-268542093658"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:24 crc kubenswrapper[4873]: E0219 10:03:24.764961 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18\": container with ID starting with 626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18 not found: ID does not exist" containerID="626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.764991 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18"} err="failed to get container status \"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18\": rpc error: code = NotFound desc = could not find container \"626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18\": container with ID starting with 626eef1424fde36fd6c591537282df1773ca875ce665ae3dc8e63713a1a95d18 not found: ID does not exist" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.765011 4873 scope.go:117] "RemoveContainer" containerID="01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611" Feb 19 10:03:24 crc kubenswrapper[4873]: E0219 10:03:24.765235 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611\": container with ID starting with 01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611 not found: ID does not exist" containerID="01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.765255 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611"} err="failed 
to get container status \"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611\": rpc error: code = NotFound desc = could not find container \"01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611\": container with ID starting with 01bd16909d7a2865478dd541aeaed2f91ba94845036b47bd03dd48be7c376611 not found: ID does not exist" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.791705 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.791746 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d2df48a-78aa-4711-a0ac-268542093658-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.791761 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.791769 4873 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d2df48a-78aa-4711-a0ac-268542093658-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.791780 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlb7v\" (UniqueName: \"kubernetes.io/projected/0d2df48a-78aa-4711-a0ac-268542093658-kube-api-access-mlb7v\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:24 crc kubenswrapper[4873]: I0219 10:03:24.974158 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.051163 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.068138 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-785b79c884-tswfl"] Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.098310 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.098952 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mv6\" (UniqueName: \"kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.099018 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.099050 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.099131 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.099250 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc\") pod \"64deb684-42f6-4bb5-b774-ef57839a56d5\" (UID: \"64deb684-42f6-4bb5-b774-ef57839a56d5\") " Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.115362 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6" (OuterVolumeSpecName: "kube-api-access-x9mv6") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "kube-api-access-x9mv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.144716 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.161770 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.164279 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config" (OuterVolumeSpecName: "config") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.165130 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.189529 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64deb684-42f6-4bb5-b774-ef57839a56d5" (UID: "64deb684-42f6-4bb5-b774-ef57839a56d5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202628 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202668 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202678 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202686 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202695 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64deb684-42f6-4bb5-b774-ef57839a56d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.202704 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mv6\" (UniqueName: \"kubernetes.io/projected/64deb684-42f6-4bb5-b774-ef57839a56d5-kube-api-access-x9mv6\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.498398 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2df48a-78aa-4711-a0ac-268542093658" path="/var/lib/kubelet/pods/0d2df48a-78aa-4711-a0ac-268542093658/volumes" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.689328 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" event={"ID":"64deb684-42f6-4bb5-b774-ef57839a56d5","Type":"ContainerDied","Data":"de0152a57feb0720c3ff97d1d52995f66e5e8c9b3cc0aff67e6dfa65b92a668a"} Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.689382 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5687f4c549-n4g4v" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.689423 4873 scope.go:117] "RemoveContainer" containerID="b6d3e647789f4d02e72fca723b661c289177f805a67c5edd75747ee3947add92" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.691622 4873 generic.go:334] "Generic (PLEG): container finished" podID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerID="be36025eab8065799686eea6527ff5c2140eda51908e0f229e70d6b1bf945a72" exitCode=0 Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.691664 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerDied","Data":"be36025eab8065799686eea6527ff5c2140eda51908e0f229e70d6b1bf945a72"} Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.711424 4873 scope.go:117] "RemoveContainer" containerID="2156fdadae7d71bb536233ced37bfe76646867be4fb2b42c0784cff65fb2da11" Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.737510 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] Feb 19 10:03:25 crc kubenswrapper[4873]: I0219 10:03:25.746277 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5687f4c549-n4g4v"] Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.042343 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.140325 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/watcher-decision-engine-0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.140391 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.140405 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.140420 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.141192 4873 scope.go:117] "RemoveContainer" containerID="cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96" Feb 19 10:03:27 crc kubenswrapper[4873]: E0219 10:03:27.141470 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ab7f7779-d6dd-4844-8af5-83ade972d9d0)\"" pod="openstack/watcher-decision-engine-0" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" Feb 19 10:03:27 crc kubenswrapper[4873]: I0219 10:03:27.494744 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" path="/var/lib/kubelet/pods/64deb684-42f6-4bb5-b774-ef57839a56d5/volumes" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.490976 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5fcd445c48-xvpw4" Feb 19 10:03:28 crc kubenswrapper[4873]: E0219 10:03:28.499880 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a649f7b_88cb_4b43_bc71_06ab3237f955.slice/crio-7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a649f7b_88cb_4b43_bc71_06ab3237f955.slice/crio-conmon-7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.773862 4873 generic.go:334] "Generic (PLEG): container finished" podID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerID="7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783" exitCode=0 Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.774066 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerDied","Data":"7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783"} Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.858923 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 10:03:28 crc kubenswrapper[4873]: E0219 10:03:28.859495 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="init" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859520 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="init" Feb 19 10:03:28 crc kubenswrapper[4873]: E0219 10:03:28.859536 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api-log" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859543 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api-log" Feb 19 10:03:28 crc kubenswrapper[4873]: E0219 10:03:28.859557 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api" Feb 19 10:03:28 
crc kubenswrapper[4873]: I0219 10:03:28.859562 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api" Feb 19 10:03:28 crc kubenswrapper[4873]: E0219 10:03:28.859589 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="dnsmasq-dns" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859595 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="dnsmasq-dns" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859785 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859804 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="64deb684-42f6-4bb5-b774-ef57839a56d5" containerName="dnsmasq-dns" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.859829 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2df48a-78aa-4711-a0ac-268542093658" containerName="barbican-api-log" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.860616 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.865203 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5plx7" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.865633 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.865874 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.874444 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.969740 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.983658 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvrr\" (UniqueName: \"kubernetes.io/projected/5c4eb2b5-d272-49ff-938e-3e3359d29f46-kube-api-access-xkvrr\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.983722 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.983761 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:28 crc kubenswrapper[4873]: I0219 10:03:28.983796 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085122 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085214 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085262 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085324 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94s6t\" (UniqueName: \"kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085372 4873 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085445 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data\") pod \"5a649f7b-88cb-4b43-bc71-06ab3237f955\" (UID: \"5a649f7b-88cb-4b43-bc71-06ab3237f955\") " Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085758 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvrr\" (UniqueName: \"kubernetes.io/projected/5c4eb2b5-d272-49ff-938e-3e3359d29f46-kube-api-access-xkvrr\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085799 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085833 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.085849 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.086981 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.091339 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.095647 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4eb2b5-d272-49ff-938e-3e3359d29f46-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.104255 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t" (OuterVolumeSpecName: "kube-api-access-94s6t") pod 
"5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "kube-api-access-94s6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.104356 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts" (OuterVolumeSpecName: "scripts") pod "5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.104422 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.104596 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvrr\" (UniqueName: \"kubernetes.io/projected/5c4eb2b5-d272-49ff-938e-3e3359d29f46-kube-api-access-xkvrr\") pod \"openstackclient\" (UID: \"5c4eb2b5-d272-49ff-938e-3e3359d29f46\") " pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.144184 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.187342 4873 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a649f7b-88cb-4b43-bc71-06ab3237f955-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.187375 4873 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.187385 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.187395 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.187404 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94s6t\" (UniqueName: \"kubernetes.io/projected/5a649f7b-88cb-4b43-bc71-06ab3237f955-kube-api-access-94s6t\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.211781 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data" (OuterVolumeSpecName: "config-data") pod "5a649f7b-88cb-4b43-bc71-06ab3237f955" (UID: "5a649f7b-88cb-4b43-bc71-06ab3237f955"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.266140 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.288890 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a649f7b-88cb-4b43-bc71-06ab3237f955-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.784162 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5a649f7b-88cb-4b43-bc71-06ab3237f955","Type":"ContainerDied","Data":"d01b29d0a705eb2d4f63512fcea22d2e1a868ce32ce3cff9e21615cd03cc670a"} Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.784226 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.784440 4873 scope.go:117] "RemoveContainer" containerID="be36025eab8065799686eea6527ff5c2140eda51908e0f229e70d6b1bf945a72" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.810413 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.826696 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.836301 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.838784 4873 scope.go:117] "RemoveContainer" containerID="7a2c1898a31bc3c3c98e2419cf3054dc1ef8c249f821db6f4293689243122783" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.877132 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:29 crc kubenswrapper[4873]: E0219 10:03:29.877601 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="probe" Feb 19 10:03:29 crc 
kubenswrapper[4873]: I0219 10:03:29.877622 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="probe" Feb 19 10:03:29 crc kubenswrapper[4873]: E0219 10:03:29.877643 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="cinder-scheduler" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.877651 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="cinder-scheduler" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.877871 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="probe" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.877912 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" containerName="cinder-scheduler" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.879239 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.885044 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:03:29 crc kubenswrapper[4873]: I0219 10:03:29.891235 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.009170 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.009247 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.009308 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.009383 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8npk\" (UniqueName: \"kubernetes.io/projected/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-kube-api-access-g8npk\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 
10:03:30.009414 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.009437 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111053 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111132 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8npk\" (UniqueName: \"kubernetes.io/projected/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-kube-api-access-g8npk\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111165 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111191 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111241 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111279 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.111688 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.116950 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.117554 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-scripts\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " 
pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.120144 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-config-data\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.122356 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.131080 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8npk\" (UniqueName: \"kubernetes.io/projected/cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1-kube-api-access-g8npk\") pod \"cinder-scheduler-0\" (UID: \"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.241361 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.649663 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.687843 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6696d67b98-wrvnm" Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.797309 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:30 crc kubenswrapper[4873]: W0219 10:03:30.802335 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9b32e6_4f78_4f9c_9fbd_e91b37d110a1.slice/crio-9ad99f1c11152aa7d83b4e1f00e4c64fe31ea74d537313adc0861ddbf7260217 WatchSource:0}: Error finding container 9ad99f1c11152aa7d83b4e1f00e4c64fe31ea74d537313adc0861ddbf7260217: Status 404 returned error can't find the container with id 9ad99f1c11152aa7d83b4e1f00e4c64fe31ea74d537313adc0861ddbf7260217 Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.803223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5c4eb2b5-d272-49ff-938e-3e3359d29f46","Type":"ContainerStarted","Data":"e2481c1a64880156900e4d68507e2c14f2f2aad465852c3198aa97ae6f916e45"} Feb 19 10:03:30 crc kubenswrapper[4873]: I0219 10:03:30.842727 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 19 10:03:31 crc kubenswrapper[4873]: I0219 10:03:31.502026 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a649f7b-88cb-4b43-bc71-06ab3237f955" 
path="/var/lib/kubelet/pods/5a649f7b-88cb-4b43-bc71-06ab3237f955/volumes" Feb 19 10:03:31 crc kubenswrapper[4873]: I0219 10:03:31.824961 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1","Type":"ContainerStarted","Data":"abdf2115fa92347ecdc390e94a1e3534c23229e1ed23bae8a1715190c991fba3"} Feb 19 10:03:31 crc kubenswrapper[4873]: I0219 10:03:31.825004 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1","Type":"ContainerStarted","Data":"9ad99f1c11152aa7d83b4e1f00e4c64fe31ea74d537313adc0861ddbf7260217"} Feb 19 10:03:32 crc kubenswrapper[4873]: I0219 10:03:32.851103 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1","Type":"ContainerStarted","Data":"46a182591dec41dd73b8c2c2e862a2acb3808dded2e789b0b636592adcb8b401"} Feb 19 10:03:32 crc kubenswrapper[4873]: I0219 10:03:32.877385 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.877359028 podStartE2EDuration="3.877359028s" podCreationTimestamp="2026-02-19 10:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:32.868778184 +0000 UTC m=+1122.158209832" watchObservedRunningTime="2026-02-19 10:03:32.877359028 +0000 UTC m=+1122.166790666" Feb 19 10:03:33 crc kubenswrapper[4873]: I0219 10:03:33.821528 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.241660 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.866302 4873 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.866649 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-central-agent" containerID="cri-o://bde159d1e713bedb9f809d72fb731177094209d5eae3fafbca00c3bf92dd0edf" gracePeriod=30 Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.866723 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="sg-core" containerID="cri-o://ea2c29e9b6a6ae2111af938dab80bf0fe86bf1c95d1132e043df7ce04e75e6e4" gracePeriod=30 Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.866765 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="proxy-httpd" containerID="cri-o://7e5158670e3a62976761b268e07eaed28eb3274621570c9d17f2e4ba96a59299" gracePeriod=30 Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.866766 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-notification-agent" containerID="cri-o://3858e9a302b9c6afbe14985a270c0ff8aac1476a89852d70300dc30e43cfb9a9" gracePeriod=30 Feb 19 10:03:35 crc kubenswrapper[4873]: I0219 10:03:35.889940 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.186:3000/\": EOF" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.427432 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7c6d694569-qbpxm"] Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.429260 4873 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.432574 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.432687 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.433432 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.448300 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c6d694569-qbpxm"] Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576497 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-combined-ca-bundle\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576554 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-log-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576604 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-config-data\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc 
kubenswrapper[4873]: I0219 10:03:36.576629 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-internal-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576680 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rmb\" (UniqueName: \"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-kube-api-access-72rmb\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576720 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-run-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576738 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-public-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.576757 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-etc-swift\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " 
pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.678789 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-run-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.678862 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-etc-swift\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.678890 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-public-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.678982 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-combined-ca-bundle\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.679030 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-log-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc 
kubenswrapper[4873]: I0219 10:03:36.679102 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-config-data\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.679163 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-internal-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.679251 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rmb\" (UniqueName: \"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-kube-api-access-72rmb\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.679448 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-run-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.679784 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51beb70-e455-4e75-9e06-863b41fbf9a8-log-httpd\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.686161 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-internal-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.697943 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-config-data\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.699001 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-etc-swift\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.700302 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-public-tls-certs\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.701650 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51beb70-e455-4e75-9e06-863b41fbf9a8-combined-ca-bundle\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.704626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rmb\" (UniqueName: 
\"kubernetes.io/projected/d51beb70-e455-4e75-9e06-863b41fbf9a8-kube-api-access-72rmb\") pod \"swift-proxy-7c6d694569-qbpxm\" (UID: \"d51beb70-e455-4e75-9e06-863b41fbf9a8\") " pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.719628 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76cc4fb9fc-vdfd4" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.760652 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.836135 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.837226 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-749b6895f6-pmvtl" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-httpd" containerID="cri-o://a79fc5280241e2b1290e7c529d75e24bfc2e34924a4a60f5635bd08c8d066317" gracePeriod=30 Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.836766 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-749b6895f6-pmvtl" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-api" containerID="cri-o://c2fc7030796f36afab7f9dbbf523310f26e22db5ef1e487e914036c0ac971b2a" gracePeriod=30 Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.904127 4873 generic.go:334] "Generic (PLEG): container finished" podID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerID="7e5158670e3a62976761b268e07eaed28eb3274621570c9d17f2e4ba96a59299" exitCode=0 Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.904168 4873 generic.go:334] "Generic (PLEG): container finished" podID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerID="ea2c29e9b6a6ae2111af938dab80bf0fe86bf1c95d1132e043df7ce04e75e6e4" exitCode=2 Feb 19 10:03:36 crc 
kubenswrapper[4873]: I0219 10:03:36.904179 4873 generic.go:334] "Generic (PLEG): container finished" podID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerID="bde159d1e713bedb9f809d72fb731177094209d5eae3fafbca00c3bf92dd0edf" exitCode=0 Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.904202 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerDied","Data":"7e5158670e3a62976761b268e07eaed28eb3274621570c9d17f2e4ba96a59299"} Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.904231 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerDied","Data":"ea2c29e9b6a6ae2111af938dab80bf0fe86bf1c95d1132e043df7ce04e75e6e4"} Feb 19 10:03:36 crc kubenswrapper[4873]: I0219 10:03:36.904246 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerDied","Data":"bde159d1e713bedb9f809d72fb731177094209d5eae3fafbca00c3bf92dd0edf"} Feb 19 10:03:37 crc kubenswrapper[4873]: I0219 10:03:37.935398 4873 generic.go:334] "Generic (PLEG): container finished" podID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerID="a79fc5280241e2b1290e7c529d75e24bfc2e34924a4a60f5635bd08c8d066317" exitCode=0 Feb 19 10:03:37 crc kubenswrapper[4873]: I0219 10:03:37.935445 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerDied","Data":"a79fc5280241e2b1290e7c529d75e24bfc2e34924a4a60f5635bd08c8d066317"} Feb 19 10:03:38 crc kubenswrapper[4873]: I0219 10:03:38.947536 4873 generic.go:334] "Generic (PLEG): container finished" podID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerID="3858e9a302b9c6afbe14985a270c0ff8aac1476a89852d70300dc30e43cfb9a9" exitCode=0 Feb 19 10:03:38 crc kubenswrapper[4873]: I0219 
10:03:38.947586 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerDied","Data":"3858e9a302b9c6afbe14985a270c0ff8aac1476a89852d70300dc30e43cfb9a9"} Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.830270 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cqfhq"] Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.831692 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.847735 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cqfhq"] Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.937340 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hbt9r"] Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.939045 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.956054 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hbt9r"] Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.970363 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:39 crc kubenswrapper[4873]: I0219 10:03:39.970528 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlp77\" (UniqueName: \"kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.071798 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d9dc-account-create-update-p9wrt"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.073689 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.074125 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gctfv\" (UniqueName: \"kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " 
pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.074295 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlp77\" (UniqueName: \"kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.074474 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.073728 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.100443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.102015 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.124453 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d9dc-account-create-update-p9wrt"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.165827 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlp77\" (UniqueName: 
\"kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77\") pod \"nova-api-db-create-cqfhq\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.180984 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.181346 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gctfv\" (UniqueName: \"kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.181719 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.181891 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r42c\" (UniqueName: \"kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.183553 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.183706 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-5862l"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.185590 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.222593 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5862l"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.236642 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gctfv\" (UniqueName: \"kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv\") pod \"nova-cell0-db-create-hbt9r\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.261032 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-00fb-account-create-update-4594l"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.262251 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.263570 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.275484 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.277710 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-00fb-account-create-update-4594l"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.295176 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r42c\" (UniqueName: \"kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.295279 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.295351 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvr7l\" (UniqueName: \"kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.297873 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: 
\"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.299199 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.323267 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r42c\" (UniqueName: \"kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c\") pod \"nova-api-d9dc-account-create-update-p9wrt\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.402646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.402715 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvr7l\" (UniqueName: \"kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.402845 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.402874 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m6d\" (UniqueName: \"kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.403743 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.411051 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.432862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvr7l\" (UniqueName: \"kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l\") pod \"nova-cell1-db-create-5862l\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.451784 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.451952 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-83cd-account-create-update-9h25q"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.455800 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.458533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.467460 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83cd-account-create-update-9h25q"] Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.505388 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.505441 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8m6d\" (UniqueName: \"kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.506665 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " 
pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.550722 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8m6d\" (UniqueName: \"kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d\") pod \"nova-cell0-00fb-account-create-update-4594l\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.596208 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.607959 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzz7z\" (UniqueName: \"kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.608177 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.609928 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.620019 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.709398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.709513 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzz7z\" (UniqueName: \"kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.710364 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.737656 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzz7z\" (UniqueName: \"kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z\") pod \"nova-cell1-83cd-account-create-update-9h25q\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.815158 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:40 crc kubenswrapper[4873]: I0219 10:03:40.843755 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-87df9b646-2jf26" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 19 10:03:42 crc kubenswrapper[4873]: I0219 10:03:42.003966 4873 generic.go:334] "Generic (PLEG): container finished" podID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerID="c2fc7030796f36afab7f9dbbf523310f26e22db5ef1e487e914036c0ac971b2a" exitCode=0 Feb 19 10:03:42 crc kubenswrapper[4873]: I0219 10:03:42.004011 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerDied","Data":"c2fc7030796f36afab7f9dbbf523310f26e22db5ef1e487e914036c0ac971b2a"} Feb 19 10:03:42 crc kubenswrapper[4873]: I0219 10:03:42.484833 4873 scope.go:117] "RemoveContainer" containerID="cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.551201 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.604858 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605308 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxxwz\" (UniqueName: \"kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605355 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605386 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605440 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605475 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.605551 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd\") pod \"437f5e56-e7c4-4280-9f75-2cf9e2496375\" (UID: \"437f5e56-e7c4-4280-9f75-2cf9e2496375\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.608352 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.609519 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.635495 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz" (OuterVolumeSpecName: "kube-api-access-sxxwz") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "kube-api-access-sxxwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.649356 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts" (OuterVolumeSpecName: "scripts") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.710287 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.710324 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxxwz\" (UniqueName: \"kubernetes.io/projected/437f5e56-e7c4-4280-9f75-2cf9e2496375-kube-api-access-sxxwz\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.710335 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.710343 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/437f5e56-e7c4-4280-9f75-2cf9e2496375-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.714756 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.783655 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.786624 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.820248 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkk7t\" (UniqueName: \"kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t\") pod \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.820302 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config\") pod \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.820332 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs\") pod \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.820394 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config\") pod 
\"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.821010 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle\") pod \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\" (UID: \"3e2e96b4-be71-4257-a1ed-0c7427ed0e64\") " Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.822596 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.822613 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.831643 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3e2e96b4-be71-4257-a1ed-0c7427ed0e64" (UID: "3e2e96b4-be71-4257-a1ed-0c7427ed0e64"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.832174 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t" (OuterVolumeSpecName: "kube-api-access-zkk7t") pod "3e2e96b4-be71-4257-a1ed-0c7427ed0e64" (UID: "3e2e96b4-be71-4257-a1ed-0c7427ed0e64"). InnerVolumeSpecName "kube-api-access-zkk7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.909714 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e2e96b4-be71-4257-a1ed-0c7427ed0e64" (UID: "3e2e96b4-be71-4257-a1ed-0c7427ed0e64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.914227 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data" (OuterVolumeSpecName: "config-data") pod "437f5e56-e7c4-4280-9f75-2cf9e2496375" (UID: "437f5e56-e7c4-4280-9f75-2cf9e2496375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.924959 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkk7t\" (UniqueName: \"kubernetes.io/projected/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-kube-api-access-zkk7t\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.925290 4873 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.925388 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.925487 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/437f5e56-e7c4-4280-9f75-2cf9e2496375-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.934313 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config" (OuterVolumeSpecName: "config") pod "3e2e96b4-be71-4257-a1ed-0c7427ed0e64" (UID: "3e2e96b4-be71-4257-a1ed-0c7427ed0e64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:43 crc kubenswrapper[4873]: I0219 10:03:43.973635 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3e2e96b4-be71-4257-a1ed-0c7427ed0e64" (UID: "3e2e96b4-be71-4257-a1ed-0c7427ed0e64"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.028998 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerStarted","Data":"747cd165cbbee3ecae96cc8c9648bca4b8f233bd477999cf09756be76185ea16"} Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.033516 4873 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.033540 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e2e96b4-be71-4257-a1ed-0c7427ed0e64-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.045472 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.045478 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"437f5e56-e7c4-4280-9f75-2cf9e2496375","Type":"ContainerDied","Data":"a72128a70548416ae211c60013a87319728fca02cd7888fa60778dec8ba63ea4"} Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.045531 4873 scope.go:117] "RemoveContainer" containerID="7e5158670e3a62976761b268e07eaed28eb3274621570c9d17f2e4ba96a59299" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.048692 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5c4eb2b5-d272-49ff-938e-3e3359d29f46","Type":"ContainerStarted","Data":"cdb538c387e4bf3b5eec3475b8fd03ad3756c6e3d42dfe7494494ac40869cd29"} Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.051664 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-749b6895f6-pmvtl" event={"ID":"3e2e96b4-be71-4257-a1ed-0c7427ed0e64","Type":"ContainerDied","Data":"936d413c07d2e70cda379bc1d9e56c3d69a0e75d48e2a897c8fb38cdf7c08e5e"} Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.051747 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-749b6895f6-pmvtl" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.165428 4873 scope.go:117] "RemoveContainer" containerID="ea2c29e9b6a6ae2111af938dab80bf0fe86bf1c95d1132e043df7ce04e75e6e4" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.178189 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.7437920829999998 podStartE2EDuration="16.178170369s" podCreationTimestamp="2026-02-19 10:03:28 +0000 UTC" firstStartedPulling="2026-02-19 10:03:29.853229831 +0000 UTC m=+1119.142661469" lastFinishedPulling="2026-02-19 10:03:43.287608107 +0000 UTC m=+1132.577039755" observedRunningTime="2026-02-19 10:03:44.093822159 +0000 UTC m=+1133.383253817" watchObservedRunningTime="2026-02-19 10:03:44.178170369 +0000 UTC m=+1133.467602007" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.180143 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.194834 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-749b6895f6-pmvtl"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.203289 4873 scope.go:117] "RemoveContainer" containerID="3858e9a302b9c6afbe14985a270c0ff8aac1476a89852d70300dc30e43cfb9a9" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.207209 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.229174 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.232258 4873 scope.go:117] "RemoveContainer" containerID="bde159d1e713bedb9f809d72fb731177094209d5eae3fafbca00c3bf92dd0edf" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240243 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 
10:03:44 crc kubenswrapper[4873]: E0219 10:03:44.240771 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-central-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240789 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-central-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: E0219 10:03:44.240804 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="proxy-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240812 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="proxy-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: E0219 10:03:44.240828 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240836 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: E0219 10:03:44.240850 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="sg-core" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240857 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="sg-core" Feb 19 10:03:44 crc kubenswrapper[4873]: E0219 10:03:44.240888 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-api" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240895 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-api" Feb 19 10:03:44 crc 
kubenswrapper[4873]: E0219 10:03:44.240909 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-notification-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.240917 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-notification-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241135 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241153 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="sg-core" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241163 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="proxy-httpd" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241174 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" containerName="neutron-api" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241180 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-central-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.241197 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" containerName="ceilometer-notification-agent" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.243993 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.251568 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.256737 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.256769 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.269018 4873 scope.go:117] "RemoveContainer" containerID="a79fc5280241e2b1290e7c529d75e24bfc2e34924a4a60f5635bd08c8d066317" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.323053 4873 scope.go:117] "RemoveContainer" containerID="c2fc7030796f36afab7f9dbbf523310f26e22db5ef1e487e914036c0ac971b2a" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.339832 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.339923 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.339963 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " 
pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.340077 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nsr\" (UniqueName: \"kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.340146 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.340233 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.340283 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.346799 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cqfhq"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.371268 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83cd-account-create-update-9h25q"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442133 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g6nsr\" (UniqueName: \"kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442198 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442263 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442303 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442324 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.442360 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc 
kubenswrapper[4873]: I0219 10:03:44.442383 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.445989 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.448782 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.450768 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.451320 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.463011 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.464534 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.475343 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6nsr\" (UniqueName: \"kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr\") pod \"ceilometer-0\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.485642 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c6d694569-qbpxm"] Feb 19 10:03:44 crc kubenswrapper[4873]: W0219 10:03:44.493175 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd51beb70_e455_4e75_9e06_863b41fbf9a8.slice/crio-30a439a6bb41129640b590b3b0580fc88086492e7a41d0d8fdaff456a8c21e64 WatchSource:0}: Error finding container 30a439a6bb41129640b590b3b0580fc88086492e7a41d0d8fdaff456a8c21e64: Status 404 returned error can't find the container with id 30a439a6bb41129640b590b3b0580fc88086492e7a41d0d8fdaff456a8c21e64 Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.584164 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-00fb-account-create-update-4594l"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.584285 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.598063 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hbt9r"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.649094 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d9dc-account-create-update-p9wrt"] Feb 19 10:03:44 crc kubenswrapper[4873]: I0219 10:03:44.664522 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-5862l"] Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.073786 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c6d694569-qbpxm" event={"ID":"d51beb70-e455-4e75-9e06-863b41fbf9a8","Type":"ContainerStarted","Data":"45a3b4a3ee6b7a836225d08c195c006061b0cf7e34c91e5b09b7e0fd8b04caad"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.074180 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c6d694569-qbpxm" event={"ID":"d51beb70-e455-4e75-9e06-863b41fbf9a8","Type":"ContainerStarted","Data":"30a439a6bb41129640b590b3b0580fc88086492e7a41d0d8fdaff456a8c21e64"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.078967 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cqfhq" event={"ID":"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6","Type":"ContainerStarted","Data":"bd733b3ac9bfee3b4bbee18624c05c5bec301e1fa09008f9d0b7376ff957c31a"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.079016 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cqfhq" event={"ID":"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6","Type":"ContainerStarted","Data":"22f8a0eea83050aa51d42973c19c5beb0e4d9dd72c9f715f981a45eaa0d280da"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.081146 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hbt9r" 
event={"ID":"b1d06337-fba1-4b9c-abbc-02f635fd3bdd","Type":"ContainerStarted","Data":"e8d94d086cbc8186ccd052540733a5a349014300ab2753d49e5a2c1a63f70e41"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.083795 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" event={"ID":"3129fa03-2686-49af-a434-341b19fb6661","Type":"ContainerStarted","Data":"3eb25a41aa1ebe3aef914232f8db296d6511713d93dc4071313bddb79d55fb80"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.085060 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" event={"ID":"79bae2a9-56d6-4292-b84b-c346934e5e08","Type":"ContainerStarted","Data":"4b38c8bb0d66f0eab5c9674afb8862e41ff03592dbf71798d72618e652e32219"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.085091 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" event={"ID":"79bae2a9-56d6-4292-b84b-c346934e5e08","Type":"ContainerStarted","Data":"65a7fa1ee7b5178acce98eedba35f101b183e27405952b28bd1e62beeb042654"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.088931 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5862l" event={"ID":"c7623a19-7720-48a2-9a09-7c1d9d1acf3a","Type":"ContainerStarted","Data":"429f383cce180b32b32f77705894926ca78081e4e70b072956064f0bd6f1a12e"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.090961 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-00fb-account-create-update-4594l" event={"ID":"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0","Type":"ContainerStarted","Data":"2c40ec07bf5b6c41e9dacb848da00df9379b151069b0e2d66aa2102a14ad638c"} Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.101701 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-cqfhq" podStartSLOduration=6.101680914 
podStartE2EDuration="6.101680914s" podCreationTimestamp="2026-02-19 10:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:45.096019103 +0000 UTC m=+1134.385450741" watchObservedRunningTime="2026-02-19 10:03:45.101680914 +0000 UTC m=+1134.391112552" Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.117381 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" podStartSLOduration=5.117358566 podStartE2EDuration="5.117358566s" podCreationTimestamp="2026-02-19 10:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:45.113360446 +0000 UTC m=+1134.402792084" watchObservedRunningTime="2026-02-19 10:03:45.117358566 +0000 UTC m=+1134.406790204" Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.289823 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.501021 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e2e96b4-be71-4257-a1ed-0c7427ed0e64" path="/var/lib/kubelet/pods/3e2e96b4-be71-4257-a1ed-0c7427ed0e64/volumes" Feb 19 10:03:45 crc kubenswrapper[4873]: I0219 10:03:45.501674 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="437f5e56-e7c4-4280-9f75-2cf9e2496375" path="/var/lib/kubelet/pods/437f5e56-e7c4-4280-9f75-2cf9e2496375/volumes" Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.106211 4873 generic.go:334] "Generic (PLEG): container finished" podID="3129fa03-2686-49af-a434-341b19fb6661" containerID="e32aae1cb5da5f588b5186b7220b1239b5386c9e999d9330ceeb577323a9711c" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.106685 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-d9dc-account-create-update-p9wrt" event={"ID":"3129fa03-2686-49af-a434-341b19fb6661","Type":"ContainerDied","Data":"e32aae1cb5da5f588b5186b7220b1239b5386c9e999d9330ceeb577323a9711c"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.123922 4873 generic.go:334] "Generic (PLEG): container finished" podID="bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" containerID="9a04cebf97180c8ea6d0724c6fe0c31aa2fbc8062f300b3608d26c13788862d9" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.124027 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-00fb-account-create-update-4594l" event={"ID":"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0","Type":"ContainerDied","Data":"9a04cebf97180c8ea6d0724c6fe0c31aa2fbc8062f300b3608d26c13788862d9"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.129861 4873 generic.go:334] "Generic (PLEG): container finished" podID="79bae2a9-56d6-4292-b84b-c346934e5e08" containerID="4b38c8bb0d66f0eab5c9674afb8862e41ff03592dbf71798d72618e652e32219" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.129919 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" event={"ID":"79bae2a9-56d6-4292-b84b-c346934e5e08","Type":"ContainerDied","Data":"4b38c8bb0d66f0eab5c9674afb8862e41ff03592dbf71798d72618e652e32219"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.133286 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c6d694569-qbpxm" event={"ID":"d51beb70-e455-4e75-9e06-863b41fbf9a8","Type":"ContainerStarted","Data":"7c8ef61c6a061d0ef8448c266c728f9f150e0c74e880a0ed8a4a67e766db6b55"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.133882 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.134375 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.137574 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerStarted","Data":"b9591c30a9a01ef428ff823d1f169650f4d7ce1f33aa25a91037d30f706c516b"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.137619 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerStarted","Data":"6bdc124061692b5087d0caebc56c02311827bc3d5ab68485c02dae362189e383"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.156848 4873 generic.go:334] "Generic (PLEG): container finished" podID="3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" containerID="bd733b3ac9bfee3b4bbee18624c05c5bec301e1fa09008f9d0b7376ff957c31a" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.156952 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cqfhq" event={"ID":"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6","Type":"ContainerDied","Data":"bd733b3ac9bfee3b4bbee18624c05c5bec301e1fa09008f9d0b7376ff957c31a"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.159032 4873 generic.go:334] "Generic (PLEG): container finished" podID="b1d06337-fba1-4b9c-abbc-02f635fd3bdd" containerID="dd8d0b4c8e6c8fa16639b3273dca3bab2c82aa1c797c85d4fed1f4b2808775ab" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.159223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hbt9r" event={"ID":"b1d06337-fba1-4b9c-abbc-02f635fd3bdd","Type":"ContainerDied","Data":"dd8d0b4c8e6c8fa16639b3273dca3bab2c82aa1c797c85d4fed1f4b2808775ab"} Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.177878 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7c6d694569-qbpxm" podStartSLOduration=10.177861607 
podStartE2EDuration="10.177861607s" podCreationTimestamp="2026-02-19 10:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:46.161349995 +0000 UTC m=+1135.450781633" watchObservedRunningTime="2026-02-19 10:03:46.177861607 +0000 UTC m=+1135.467293245" Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.179623 4873 generic.go:334] "Generic (PLEG): container finished" podID="c7623a19-7720-48a2-9a09-7c1d9d1acf3a" containerID="4f7932193028af20a89fc4d6ec905cbeaeae8f2a0c2eccdd691dcdae0d83a150" exitCode=0 Feb 19 10:03:46 crc kubenswrapper[4873]: I0219 10:03:46.179691 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5862l" event={"ID":"c7623a19-7720-48a2-9a09-7c1d9d1acf3a","Type":"ContainerDied","Data":"4f7932193028af20a89fc4d6ec905cbeaeae8f2a0c2eccdd691dcdae0d83a150"} Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.135043 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.171517 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.193274 4873 generic.go:334] "Generic (PLEG): container finished" podID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerID="1a1b6f4ba694daddb17f029a0bbce06c79e8294e69f096dade9d91ac98c03f81" exitCode=137 Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.193392 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerDied","Data":"1a1b6f4ba694daddb17f029a0bbce06c79e8294e69f096dade9d91ac98c03f81"} Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.196317 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerStarted","Data":"285d8f5f3df00508865f8d3bde2dfe35df29620af39cdf5820cccfea23f27068"} Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.196633 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.246648 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.329971 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.704651 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.867069 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts\") pod \"79bae2a9-56d6-4292-b84b-c346934e5e08\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.867209 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzz7z\" (UniqueName: \"kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z\") pod \"79bae2a9-56d6-4292-b84b-c346934e5e08\" (UID: \"79bae2a9-56d6-4292-b84b-c346934e5e08\") " Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.868909 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79bae2a9-56d6-4292-b84b-c346934e5e08" (UID: "79bae2a9-56d6-4292-b84b-c346934e5e08"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.886587 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z" (OuterVolumeSpecName: "kube-api-access-tzz7z") pod "79bae2a9-56d6-4292-b84b-c346934e5e08" (UID: "79bae2a9-56d6-4292-b84b-c346934e5e08"). InnerVolumeSpecName "kube-api-access-tzz7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.970807 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bae2a9-56d6-4292-b84b-c346934e5e08-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:47 crc kubenswrapper[4873]: I0219 10:03:47.970838 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzz7z\" (UniqueName: \"kubernetes.io/projected/79bae2a9-56d6-4292-b84b-c346934e5e08-kube-api-access-tzz7z\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.166535 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.189504 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.192451 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.211654 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.218508 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-5862l" event={"ID":"c7623a19-7720-48a2-9a09-7c1d9d1acf3a","Type":"ContainerDied","Data":"429f383cce180b32b32f77705894926ca78081e4e70b072956064f0bd6f1a12e"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.218546 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429f383cce180b32b32f77705894926ca78081e4e70b072956064f0bd6f1a12e" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.218612 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-5862l" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.233527 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" event={"ID":"3129fa03-2686-49af-a434-341b19fb6661","Type":"ContainerDied","Data":"3eb25a41aa1ebe3aef914232f8db296d6511713d93dc4071313bddb79d55fb80"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.233559 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb25a41aa1ebe3aef914232f8db296d6511713d93dc4071313bddb79d55fb80" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.233659 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d9dc-account-create-update-p9wrt" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.240477 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.240531 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.245422 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.246746 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87df9b646-2jf26" event={"ID":"cace1157-1459-4823-aa8f-b2c246d3adeb","Type":"ContainerDied","Data":"9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.246780 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b51de3389b17ee12d7f59af44e9eef14565d045f0ca62918a72e7a072d8c72e" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.247758 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.248181 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hbt9r" event={"ID":"b1d06337-fba1-4b9c-abbc-02f635fd3bdd","Type":"ContainerDied","Data":"e8d94d086cbc8186ccd052540733a5a349014300ab2753d49e5a2c1a63f70e41"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.248197 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d94d086cbc8186ccd052540733a5a349014300ab2753d49e5a2c1a63f70e41" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.248236 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hbt9r" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.254797 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerStarted","Data":"1c794715d03a80d14cb4c1609cba1fa9ac955e06ef46f83a91a22fdc84e32a39"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.259917 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cqfhq" event={"ID":"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6","Type":"ContainerDied","Data":"22f8a0eea83050aa51d42973c19c5beb0e4d9dd72c9f715f981a45eaa0d280da"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.259945 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22f8a0eea83050aa51d42973c19c5beb0e4d9dd72c9f715f981a45eaa0d280da" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.259996 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cqfhq" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.262961 4873 generic.go:334] "Generic (PLEG): container finished" podID="8786cefd-adc3-4acf-bc04-066bc0510131" containerID="d65db798423e51c2a9f8d6a3012c9ccc857964209249f5fc3748a28883833968" exitCode=137 Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.263015 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerDied","Data":"d65db798423e51c2a9f8d6a3012c9ccc857964209249f5fc3748a28883833968"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.268484 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-00fb-account-create-update-4594l" event={"ID":"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0","Type":"ContainerDied","Data":"2c40ec07bf5b6c41e9dacb848da00df9379b151069b0e2d66aa2102a14ad638c"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.268510 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-00fb-account-create-update-4594l" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.268529 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c40ec07bf5b6c41e9dacb848da00df9379b151069b0e2d66aa2102a14ad638c" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.273539 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerID="747cd165cbbee3ecae96cc8c9648bca4b8f233bd477999cf09756be76185ea16" exitCode=1 Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.273779 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerDied","Data":"747cd165cbbee3ecae96cc8c9648bca4b8f233bd477999cf09756be76185ea16"} Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.273847 4873 scope.go:117] "RemoveContainer" containerID="cc54b787c703c958a190db022b86cf50c377c895c0e9b21773e78b4356509d96" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.273935 4873 scope.go:117] "RemoveContainer" containerID="747cd165cbbee3ecae96cc8c9648bca4b8f233bd477999cf09756be76185ea16" Feb 19 10:03:48 crc kubenswrapper[4873]: E0219 10:03:48.274371 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ab7f7779-d6dd-4844-8af5-83ade972d9d0)\"" pod="openstack/watcher-decision-engine-0" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.275708 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" event={"ID":"79bae2a9-56d6-4292-b84b-c346934e5e08","Type":"ContainerDied","Data":"65a7fa1ee7b5178acce98eedba35f101b183e27405952b28bd1e62beeb042654"} 
Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.275743 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a7fa1ee7b5178acce98eedba35f101b183e27405952b28bd1e62beeb042654" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.275721 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83cd-account-create-update-9h25q" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.278906 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8m6d\" (UniqueName: \"kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d\") pod \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.278979 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts\") pod \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.279856 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7623a19-7720-48a2-9a09-7c1d9d1acf3a" (UID: "c7623a19-7720-48a2-9a09-7c1d9d1acf3a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.280793 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts\") pod \"3129fa03-2686-49af-a434-341b19fb6661\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.280821 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts\") pod \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.280843 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r42c\" (UniqueName: \"kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c\") pod \"3129fa03-2686-49af-a434-341b19fb6661\" (UID: \"3129fa03-2686-49af-a434-341b19fb6661\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.280892 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts\") pod \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\" (UID: \"bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.281034 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvr7l\" (UniqueName: \"kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l\") pod \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\" (UID: \"c7623a19-7720-48a2-9a09-7c1d9d1acf3a\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.281244 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wlp77\" (UniqueName: \"kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77\") pod \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\" (UID: \"3c26aa2d-a8f4-4645-a1b6-055cb88e64d6\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.281861 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3129fa03-2686-49af-a434-341b19fb6661" (UID: "3129fa03-2686-49af-a434-341b19fb6661"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.283269 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" (UID: "bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.284371 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.284391 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3129fa03-2686-49af-a434-341b19fb6661-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.284403 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.292610 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" (UID: "3c26aa2d-a8f4-4645-a1b6-055cb88e64d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.300248 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l" (OuterVolumeSpecName: "kube-api-access-mvr7l") pod "c7623a19-7720-48a2-9a09-7c1d9d1acf3a" (UID: "c7623a19-7720-48a2-9a09-7c1d9d1acf3a"). InnerVolumeSpecName "kube-api-access-mvr7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.305447 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c" (OuterVolumeSpecName: "kube-api-access-8r42c") pod "3129fa03-2686-49af-a434-341b19fb6661" (UID: "3129fa03-2686-49af-a434-341b19fb6661"). InnerVolumeSpecName "kube-api-access-8r42c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.305688 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77" (OuterVolumeSpecName: "kube-api-access-wlp77") pod "3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" (UID: "3c26aa2d-a8f4-4645-a1b6-055cb88e64d6"). InnerVolumeSpecName "kube-api-access-wlp77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.330668 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d" (OuterVolumeSpecName: "kube-api-access-q8m6d") pod "bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" (UID: "bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0"). InnerVolumeSpecName "kube-api-access-q8m6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386009 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386158 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386189 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gctfv\" (UniqueName: \"kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv\") pod \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386251 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386313 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts\") pod \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\" (UID: \"b1d06337-fba1-4b9c-abbc-02f635fd3bdd\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386350 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jt75m\" (UniqueName: \"kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386408 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386453 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386579 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key\") pod \"cace1157-1459-4823-aa8f-b2c246d3adeb\" (UID: \"cace1157-1459-4823-aa8f-b2c246d3adeb\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.386820 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1d06337-fba1-4b9c-abbc-02f635fd3bdd" (UID: "b1d06337-fba1-4b9c-abbc-02f635fd3bdd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387172 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvr7l\" (UniqueName: \"kubernetes.io/projected/c7623a19-7720-48a2-9a09-7c1d9d1acf3a-kube-api-access-mvr7l\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387197 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlp77\" (UniqueName: \"kubernetes.io/projected/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-kube-api-access-wlp77\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387209 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8m6d\" (UniqueName: \"kubernetes.io/projected/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0-kube-api-access-q8m6d\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387223 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387234 4873 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.387245 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r42c\" (UniqueName: \"kubernetes.io/projected/3129fa03-2686-49af-a434-341b19fb6661-kube-api-access-8r42c\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.388158 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs" (OuterVolumeSpecName: "logs") pod 
"cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.391970 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m" (OuterVolumeSpecName: "kube-api-access-jt75m") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "kube-api-access-jt75m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.392526 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.402729 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv" (OuterVolumeSpecName: "kube-api-access-gctfv") pod "b1d06337-fba1-4b9c-abbc-02f635fd3bdd" (UID: "b1d06337-fba1-4b9c-abbc-02f635fd3bdd"). InnerVolumeSpecName "kube-api-access-gctfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.424231 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.436477 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data" (OuterVolumeSpecName: "config-data") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.448906 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts" (OuterVolumeSpecName: "scripts") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.452504 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.478073 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "cace1157-1459-4823-aa8f-b2c246d3adeb" (UID: "cace1157-1459-4823-aa8f-b2c246d3adeb"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.494851 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.494978 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495021 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdhrt\" (UniqueName: \"kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495200 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495230 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495262 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data\") pod \"8786cefd-adc3-4acf-bc04-066bc0510131\" (UID: \"8786cefd-adc3-4acf-bc04-066bc0510131\") " Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495730 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cace1157-1459-4823-aa8f-b2c246d3adeb-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495746 4873 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495760 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495772 4873 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cace1157-1459-4823-aa8f-b2c246d3adeb-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495783 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gctfv\" (UniqueName: \"kubernetes.io/projected/b1d06337-fba1-4b9c-abbc-02f635fd3bdd-kube-api-access-gctfv\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495796 4873 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495808 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt75m\" (UniqueName: \"kubernetes.io/projected/cace1157-1459-4823-aa8f-b2c246d3adeb-kube-api-access-jt75m\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.495819 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cace1157-1459-4823-aa8f-b2c246d3adeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.499508 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.500591 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts" (OuterVolumeSpecName: "scripts") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.501327 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.501717 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs" (OuterVolumeSpecName: "logs") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.504448 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt" (OuterVolumeSpecName: "kube-api-access-vdhrt") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "kube-api-access-vdhrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.541870 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.580607 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data" (OuterVolumeSpecName: "config-data") pod "8786cefd-adc3-4acf-bc04-066bc0510131" (UID: "8786cefd-adc3-4acf-bc04-066bc0510131"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597592 4873 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8786cefd-adc3-4acf-bc04-066bc0510131-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597622 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597631 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597642 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597651 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8786cefd-adc3-4acf-bc04-066bc0510131-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597659 4873 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8786cefd-adc3-4acf-bc04-066bc0510131-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4873]: I0219 10:03:48.597668 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdhrt\" (UniqueName: \"kubernetes.io/projected/8786cefd-adc3-4acf-bc04-066bc0510131-kube-api-access-vdhrt\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.295407 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8786cefd-adc3-4acf-bc04-066bc0510131","Type":"ContainerDied","Data":"8e464848b3ebc3175565441b190b29e16936e4f1ed928d10cd26c6f756af71c1"} Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.295684 4873 scope.go:117] "RemoveContainer" containerID="d65db798423e51c2a9f8d6a3012c9ccc857964209249f5fc3748a28883833968" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.295825 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.306158 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87df9b646-2jf26" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.330295 4873 scope.go:117] "RemoveContainer" containerID="f05593f5088b36a20866dde0c189f6365ca5bb5d444303ca92f1e7f75e70ca2f" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.368665 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.382540 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-87df9b646-2jf26"] Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.392796 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.402921 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.425743 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426183 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7623a19-7720-48a2-9a09-7c1d9d1acf3a" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426207 4873 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c7623a19-7720-48a2-9a09-7c1d9d1acf3a" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426228 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3129fa03-2686-49af-a434-341b19fb6661" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426238 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3129fa03-2686-49af-a434-341b19fb6661" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426248 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426257 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426272 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426280 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426297 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bae2a9-56d6-4292-b84b-c346934e5e08" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426304 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bae2a9-56d6-4292-b84b-c346934e5e08" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426321 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon-log" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 
10:03:49.426328 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon-log" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426338 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426346 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426363 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d06337-fba1-4b9c-abbc-02f635fd3bdd" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426371 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d06337-fba1-4b9c-abbc-02f635fd3bdd" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426388 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426396 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api" Feb 19 10:03:49 crc kubenswrapper[4873]: E0219 10:03:49.426407 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api-log" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426414 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api-log" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426627 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3129fa03-2686-49af-a434-341b19fb6661" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc 
kubenswrapper[4873]: I0219 10:03:49.426643 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bae2a9-56d6-4292-b84b-c346934e5e08" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426666 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426674 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d06337-fba1-4b9c-abbc-02f635fd3bdd" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426683 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" containerName="mariadb-account-create-update" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426695 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7623a19-7720-48a2-9a09-7c1d9d1acf3a" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426703 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426712 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" containerName="cinder-api-log" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426721 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" containerName="horizon-log" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.426731 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" containerName="mariadb-database-create" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.427844 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.430437 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.430638 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.430818 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.434311 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.498459 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8786cefd-adc3-4acf-bc04-066bc0510131" path="/var/lib/kubelet/pods/8786cefd-adc3-4acf-bc04-066bc0510131/volumes" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.499121 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cace1157-1459-4823-aa8f-b2c246d3adeb" path="/var/lib/kubelet/pods/cace1157-1459-4823-aa8f-b2c246d3adeb/volumes" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.517996 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518441 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dabe51-c676-42bb-936a-d784ee2e565a-logs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518476 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518522 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrf5\" (UniqueName: \"kubernetes.io/projected/f3dabe51-c676-42bb-936a-d784ee2e565a-kube-api-access-lfrf5\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518575 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-scripts\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518761 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3dabe51-c676-42bb-936a-d784ee2e565a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518788 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.518835 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629561 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-scripts\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629639 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629704 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f3dabe51-c676-42bb-936a-d784ee2e565a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629722 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " 
pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629755 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629786 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629838 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dabe51-c676-42bb-936a-d784ee2e565a-logs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629854 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.629901 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrf5\" (UniqueName: \"kubernetes.io/projected/f3dabe51-c676-42bb-936a-d784ee2e565a-kube-api-access-lfrf5\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.632185 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f3dabe51-c676-42bb-936a-d784ee2e565a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.632438 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3dabe51-c676-42bb-936a-d784ee2e565a-logs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.634127 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.634481 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data-custom\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.635319 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.635420 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.636090 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-config-data\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.636390 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3dabe51-c676-42bb-936a-d784ee2e565a-scripts\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.648155 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrf5\" (UniqueName: \"kubernetes.io/projected/f3dabe51-c676-42bb-936a-d784ee2e565a-kube-api-access-lfrf5\") pod \"cinder-api-0\" (UID: \"f3dabe51-c676-42bb-936a-d784ee2e565a\") " pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.724964 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.746483 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.836880 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle\") pod \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.836929 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data\") pod \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.836976 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs\") pod \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.836994 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca\") pod \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.837068 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzrl\" (UniqueName: \"kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl\") pod \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\" (UID: \"ab7f7779-d6dd-4844-8af5-83ade972d9d0\") " Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.838319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs" (OuterVolumeSpecName: "logs") pod "ab7f7779-d6dd-4844-8af5-83ade972d9d0" (UID: "ab7f7779-d6dd-4844-8af5-83ade972d9d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.838984 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f7779-d6dd-4844-8af5-83ade972d9d0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.856337 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl" (OuterVolumeSpecName: "kube-api-access-6mzrl") pod "ab7f7779-d6dd-4844-8af5-83ade972d9d0" (UID: "ab7f7779-d6dd-4844-8af5-83ade972d9d0"). InnerVolumeSpecName "kube-api-access-6mzrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.872246 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab7f7779-d6dd-4844-8af5-83ade972d9d0" (UID: "ab7f7779-d6dd-4844-8af5-83ade972d9d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.888338 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ab7f7779-d6dd-4844-8af5-83ade972d9d0" (UID: "ab7f7779-d6dd-4844-8af5-83ade972d9d0"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.914311 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data" (OuterVolumeSpecName: "config-data") pod "ab7f7779-d6dd-4844-8af5-83ade972d9d0" (UID: "ab7f7779-d6dd-4844-8af5-83ade972d9d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.940963 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.941000 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.941010 4873 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ab7f7779-d6dd-4844-8af5-83ade972d9d0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4873]: I0219 10:03:49.941019 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzrl\" (UniqueName: \"kubernetes.io/projected/ab7f7779-d6dd-4844-8af5-83ade972d9d0-kube-api-access-6mzrl\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.196832 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.321157 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"ab7f7779-d6dd-4844-8af5-83ade972d9d0","Type":"ContainerDied","Data":"6715424b51c6df78b1881817986335974e70067799bdff519c5527858f40bf0f"} Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.321182 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.321205 4873 scope.go:117] "RemoveContainer" containerID="747cd165cbbee3ecae96cc8c9648bca4b8f233bd477999cf09756be76185ea16" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.323811 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f3dabe51-c676-42bb-936a-d784ee2e565a","Type":"ContainerStarted","Data":"b103a88a763865a4dcb1fcf97e0b3dd3c82cdbe60bd19d5f40ed2c98f6cf9e9e"} Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.430435 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.463297 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.475594 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:50 crc kubenswrapper[4873]: E0219 10:03:50.476046 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476067 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: E0219 10:03:50.476083 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476091 4873 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: E0219 10:03:50.476127 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476137 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476372 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476397 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.476417 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.477245 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.479755 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.487228 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.556613 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.556676 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.556706 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pmh\" (UniqueName: \"kubernetes.io/projected/3ecf8671-28f5-4549-a4c1-0cdad8421837-kube-api-access-55pmh\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.556840 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.556879 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecf8671-28f5-4549-a4c1-0cdad8421837-logs\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.658465 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.658847 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55pmh\" (UniqueName: \"kubernetes.io/projected/3ecf8671-28f5-4549-a4c1-0cdad8421837-kube-api-access-55pmh\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.658984 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.659059 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecf8671-28f5-4549-a4c1-0cdad8421837-logs\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc 
kubenswrapper[4873]: I0219 10:03:50.659242 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.661755 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecf8671-28f5-4549-a4c1-0cdad8421837-logs\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.682269 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.686145 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.691736 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecf8671-28f5-4549-a4c1-0cdad8421837-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.706908 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-55pmh\" (UniqueName: \"kubernetes.io/projected/3ecf8671-28f5-4549-a4c1-0cdad8421837-kube-api-access-55pmh\") pod \"watcher-decision-engine-0\" (UID: \"3ecf8671-28f5-4549-a4c1-0cdad8421837\") " pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.742899 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgm8t"] Feb 19 10:03:50 crc kubenswrapper[4873]: E0219 10:03:50.743440 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.743464 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.743723 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" containerName="watcher-decision-engine" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.744535 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.748973 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.751969 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.752159 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c85mr" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.763486 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgm8t"] Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.798809 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.861820 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.861935 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.861960 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.862059 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74xc\" (UniqueName: \"kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.963778 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.963829 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.963902 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74xc\" (UniqueName: \"kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.964016 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.968593 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.968744 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.968896 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:50 crc kubenswrapper[4873]: I0219 10:03:50.983716 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74xc\" (UniqueName: \"kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc\") pod \"nova-cell0-conductor-db-sync-qgm8t\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.079880 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.350307 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f3dabe51-c676-42bb-936a-d784ee2e565a","Type":"ContainerStarted","Data":"2be39cd056becc984b00fb6bbd9d4f93d9116f3c6319509fcab0eca65233b21c"} Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.514991 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7f7779-d6dd-4844-8af5-83ade972d9d0" path="/var/lib/kubelet/pods/ab7f7779-d6dd-4844-8af5-83ade972d9d0/volumes" Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.695614 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgm8t"] Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.798346 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.799915 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c6d694569-qbpxm" Feb 19 10:03:51 crc kubenswrapper[4873]: I0219 10:03:51.831704 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 10:03:51 crc kubenswrapper[4873]: W0219 10:03:51.837747 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ecf8671_28f5_4549_a4c1_0cdad8421837.slice/crio-c65e7d8b914be6bbb256158d479506d09a117eba3db4bd052d6b9f2bf1507ad5 WatchSource:0}: Error finding container c65e7d8b914be6bbb256158d479506d09a117eba3db4bd052d6b9f2bf1507ad5: Status 404 returned error can't find the container with id c65e7d8b914be6bbb256158d479506d09a117eba3db4bd052d6b9f2bf1507ad5 Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.456933 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"3ecf8671-28f5-4549-a4c1-0cdad8421837","Type":"ContainerStarted","Data":"9e5b6951ebca0fcf3c7affa218cccea2ee4fb678a4150c91a23f7158b5922791"} Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.457235 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3ecf8671-28f5-4549-a4c1-0cdad8421837","Type":"ContainerStarted","Data":"c65e7d8b914be6bbb256158d479506d09a117eba3db4bd052d6b9f2bf1507ad5"} Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.463653 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" event={"ID":"2f8fe617-c1d5-41f8-a23a-eeb88444f620","Type":"ContainerStarted","Data":"5c51d1e42a2baea6c1a9d92c8fcf55ee9de4da189a67e10f9dca665987216a5f"} Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.472293 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f3dabe51-c676-42bb-936a-d784ee2e565a","Type":"ContainerStarted","Data":"c6f0ae409955b29670854741f2c5f5c3af5ede9713476cd73c53562a322da01b"} Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.473249 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.480211 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerStarted","Data":"00fa225480a93439d5f2b6e9127475ed4c852f38cc7854f8dfea3786315c0e99"} Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.480302 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.4802546469999998 podStartE2EDuration="2.480254647s" podCreationTimestamp="2026-02-19 10:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:03:52.476029552 +0000 UTC m=+1141.765461190" watchObservedRunningTime="2026-02-19 10:03:52.480254647 +0000 UTC m=+1141.769686295" Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.480359 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.503548 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.503531459 podStartE2EDuration="3.503531459s" podCreationTimestamp="2026-02-19 10:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:52.496969395 +0000 UTC m=+1141.786401033" watchObservedRunningTime="2026-02-19 10:03:52.503531459 +0000 UTC m=+1141.792963097" Feb 19 10:03:52 crc kubenswrapper[4873]: I0219 10:03:52.518992 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.709469941 podStartE2EDuration="8.518973125s" podCreationTimestamp="2026-02-19 10:03:44 +0000 UTC" firstStartedPulling="2026-02-19 10:03:45.317372728 +0000 UTC m=+1134.606804366" lastFinishedPulling="2026-02-19 10:03:51.126875912 +0000 UTC m=+1140.416307550" observedRunningTime="2026-02-19 10:03:52.513676943 +0000 UTC m=+1141.803108601" watchObservedRunningTime="2026-02-19 10:03:52.518973125 +0000 UTC m=+1141.808404763" Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.249636 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.250292 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-central-agent" containerID="cri-o://b9591c30a9a01ef428ff823d1f169650f4d7ce1f33aa25a91037d30f706c516b" gracePeriod=30 Feb 
19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.250403 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="proxy-httpd" containerID="cri-o://00fa225480a93439d5f2b6e9127475ed4c852f38cc7854f8dfea3786315c0e99" gracePeriod=30 Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.250443 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="sg-core" containerID="cri-o://1c794715d03a80d14cb4c1609cba1fa9ac955e06ef46f83a91a22fdc84e32a39" gracePeriod=30 Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.250470 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-notification-agent" containerID="cri-o://285d8f5f3df00508865f8d3bde2dfe35df29620af39cdf5820cccfea23f27068" gracePeriod=30 Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.528976 4873 generic.go:334] "Generic (PLEG): container finished" podID="57a06649-fa26-4970-90c9-23271a1471a5" containerID="00fa225480a93439d5f2b6e9127475ed4c852f38cc7854f8dfea3786315c0e99" exitCode=0 Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.529224 4873 generic.go:334] "Generic (PLEG): container finished" podID="57a06649-fa26-4970-90c9-23271a1471a5" containerID="1c794715d03a80d14cb4c1609cba1fa9ac955e06ef46f83a91a22fdc84e32a39" exitCode=2 Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.529062 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerDied","Data":"00fa225480a93439d5f2b6e9127475ed4c852f38cc7854f8dfea3786315c0e99"} Feb 19 10:03:56 crc kubenswrapper[4873]: I0219 10:03:56.529260 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerDied","Data":"1c794715d03a80d14cb4c1609cba1fa9ac955e06ef46f83a91a22fdc84e32a39"} Feb 19 10:03:57 crc kubenswrapper[4873]: I0219 10:03:57.552033 4873 generic.go:334] "Generic (PLEG): container finished" podID="57a06649-fa26-4970-90c9-23271a1471a5" containerID="285d8f5f3df00508865f8d3bde2dfe35df29620af39cdf5820cccfea23f27068" exitCode=0 Feb 19 10:03:57 crc kubenswrapper[4873]: I0219 10:03:57.552072 4873 generic.go:334] "Generic (PLEG): container finished" podID="57a06649-fa26-4970-90c9-23271a1471a5" containerID="b9591c30a9a01ef428ff823d1f169650f4d7ce1f33aa25a91037d30f706c516b" exitCode=0 Feb 19 10:03:57 crc kubenswrapper[4873]: I0219 10:03:57.552093 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerDied","Data":"285d8f5f3df00508865f8d3bde2dfe35df29620af39cdf5820cccfea23f27068"} Feb 19 10:03:57 crc kubenswrapper[4873]: I0219 10:03:57.552133 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerDied","Data":"b9591c30a9a01ef428ff823d1f169650f4d7ce1f33aa25a91037d30f706c516b"} Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.365352 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.366127 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-httpd" containerID="cri-o://78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4" gracePeriod=30 Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.366043 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-log" containerID="cri-o://69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0" gracePeriod=30 Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.593519 4873 generic.go:334] "Generic (PLEG): container finished" podID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerID="69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0" exitCode=143 Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.593567 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerDied","Data":"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0"} Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.799406 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 10:04:00 crc kubenswrapper[4873]: I0219 10:04:00.851437 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.610758 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.660833 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.899499 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.990660 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.990789 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.990869 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.990896 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.990921 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.991007 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6nsr\" (UniqueName: 
\"kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.991066 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle\") pod \"57a06649-fa26-4970-90c9-23271a1471a5\" (UID: \"57a06649-fa26-4970-90c9-23271a1471a5\") " Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.991151 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.991610 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.991649 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:01 crc kubenswrapper[4873]: I0219 10:04:01.998724 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts" (OuterVolumeSpecName: "scripts") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.017280 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.018863 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr" (OuterVolumeSpecName: "kube-api-access-g6nsr") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "kube-api-access-g6nsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.095327 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.095643 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.095654 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57a06649-fa26-4970-90c9-23271a1471a5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.095664 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6nsr\" (UniqueName: 
\"kubernetes.io/projected/57a06649-fa26-4970-90c9-23271a1471a5-kube-api-access-g6nsr\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.144356 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data" (OuterVolumeSpecName: "config-data") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.148395 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a06649-fa26-4970-90c9-23271a1471a5" (UID: "57a06649-fa26-4970-90c9-23271a1471a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.197275 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.197302 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a06649-fa26-4970-90c9-23271a1471a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.360654 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.537696 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.604098 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.604294 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.604928 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhjk\" (UniqueName: \"kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.605074 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.605194 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.605274 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.605312 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.605348 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle\") pod \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\" (UID: \"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a\") " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.606079 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.606205 4873 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.606932 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs" (OuterVolumeSpecName: "logs") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.610246 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts" (OuterVolumeSpecName: "scripts") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.615049 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.621044 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk" (OuterVolumeSpecName: "kube-api-access-9xhjk") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "kube-api-access-9xhjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.646364 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" event={"ID":"2f8fe617-c1d5-41f8-a23a-eeb88444f620","Type":"ContainerStarted","Data":"e9c86902c9a53b767e99a6b86b96ed298fea1e2244a0785fd8c7eeb7d4f69fa7"} Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.650467 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.663963 4873 generic.go:334] "Generic (PLEG): container finished" podID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerID="78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4" exitCode=0 Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.664042 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerDied","Data":"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4"} Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.664067 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a","Type":"ContainerDied","Data":"27dd31c2ce043db502b2393bca38366353aaef589f4415a46def82efc00bdbd7"} Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.664083 4873 scope.go:117] "RemoveContainer" containerID="78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.664214 4873 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.665675 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" podStartSLOduration=2.803989243 podStartE2EDuration="12.665666362s" podCreationTimestamp="2026-02-19 10:03:50 +0000 UTC" firstStartedPulling="2026-02-19 10:03:51.72540554 +0000 UTC m=+1141.014837178" lastFinishedPulling="2026-02-19 10:04:01.587082659 +0000 UTC m=+1150.876514297" observedRunningTime="2026-02-19 10:04:02.663846307 +0000 UTC m=+1151.953277945" watchObservedRunningTime="2026-02-19 10:04:02.665666362 +0000 UTC m=+1151.955098000" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.695425 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.697437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57a06649-fa26-4970-90c9-23271a1471a5","Type":"ContainerDied","Data":"6bdc124061692b5087d0caebc56c02311827bc3d5ab68485c02dae362189e383"} Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.703241 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data" (OuterVolumeSpecName: "config-data") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708348 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708385 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708423 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708436 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708461 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.708473 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xhjk\" (UniqueName: \"kubernetes.io/projected/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-kube-api-access-9xhjk\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.714302 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" (UID: "0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.736262 4873 scope.go:117] "RemoveContainer" containerID="69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.746654 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.762171 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.810383 4873 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.810409 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.820576 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845162 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845690 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-log" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845708 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-log" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845723 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="sg-core" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845730 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="sg-core" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845750 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845758 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845769 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-central-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845777 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-central-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845788 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="proxy-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845795 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="proxy-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.845810 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-notification-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.845818 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-notification-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846061 4873 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="proxy-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846076 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-httpd" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846088 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="sg-core" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846121 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-central-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846137 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a06649-fa26-4970-90c9-23271a1471a5" containerName="ceilometer-notification-agent" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.846153 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" containerName="glance-log" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.848317 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.855660 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.862061 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.862227 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.887680 4873 scope.go:117] "RemoveContainer" containerID="78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 10:04:02.888343 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4\": container with ID starting with 78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4 not found: ID does not exist" containerID="78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.888390 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4"} err="failed to get container status \"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4\": rpc error: code = NotFound desc = could not find container \"78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4\": container with ID starting with 78be319f5d27debcef5596e89d486ade1ea7d0c8bf4ab5f7c035ba6d936419b4 not found: ID does not exist" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.888421 4873 scope.go:117] "RemoveContainer" containerID="69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0" Feb 19 10:04:02 crc kubenswrapper[4873]: E0219 
10:04:02.888772 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0\": container with ID starting with 69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0 not found: ID does not exist" containerID="69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.888799 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0"} err="failed to get container status \"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0\": rpc error: code = NotFound desc = could not find container \"69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0\": container with ID starting with 69f1aeefee26a2a735764a65c138335e9092bf60086c87c1bdd76e2c3a2719f0 not found: ID does not exist" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.888815 4873 scope.go:117] "RemoveContainer" containerID="00fa225480a93439d5f2b6e9127475ed4c852f38cc7854f8dfea3786315c0e99" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912505 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912576 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912601 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912655 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912675 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912849 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdn9\" (UniqueName: \"kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.912889 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:02 crc kubenswrapper[4873]: I0219 10:04:02.927281 4873 scope.go:117] "RemoveContainer" containerID="1c794715d03a80d14cb4c1609cba1fa9ac955e06ef46f83a91a22fdc84e32a39" Feb 19 10:04:02 
crc kubenswrapper[4873]: I0219 10:04:02.959746 4873 scope.go:117] "RemoveContainer" containerID="285d8f5f3df00508865f8d3bde2dfe35df29620af39cdf5820cccfea23f27068" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.007585 4873 scope.go:117] "RemoveContainer" containerID="b9591c30a9a01ef428ff823d1f169650f4d7ce1f33aa25a91037d30f706c516b" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.021957 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022032 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022054 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022094 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022136 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd\") 
pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022248 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdn9\" (UniqueName: \"kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022273 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.022862 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.023779 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.024178 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.034230 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " 
pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.036740 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.042897 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.050898 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdn9\" (UniqueName: \"kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.055991 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.078190 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.095582 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.099925 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.104563 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.105385 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.112226 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126171 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126221 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126264 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126282 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126310 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6g6\" (UniqueName: \"kubernetes.io/projected/c0df7963-e78f-457c-a27f-45c26232cfa7-kube-api-access-zl6g6\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126552 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126621 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.126653 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.179235 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228390 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228663 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228694 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228722 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228763 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: 
I0219 10:04:03.228784 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228811 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6g6\" (UniqueName: \"kubernetes.io/projected/c0df7963-e78f-457c-a27f-45c26232cfa7-kube-api-access-zl6g6\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228858 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.228978 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.229475 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.229875 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0df7963-e78f-457c-a27f-45c26232cfa7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.242136 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.244741 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.246359 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.246794 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0df7963-e78f-457c-a27f-45c26232cfa7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.252365 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6g6\" (UniqueName: 
\"kubernetes.io/projected/c0df7963-e78f-457c-a27f-45c26232cfa7-kube-api-access-zl6g6\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.269452 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0df7963-e78f-457c-a27f-45c26232cfa7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.451537 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.529647 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a" path="/var/lib/kubelet/pods/0a3b30a9-f42d-4ac8-a0d0-9c03d0071c7a/volumes" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.530407 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a06649-fa26-4970-90c9-23271a1471a5" path="/var/lib/kubelet/pods/57a06649-fa26-4970-90c9-23271a1471a5/volumes" Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.560975 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:03 crc kubenswrapper[4873]: I0219 10:04:03.729255 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerStarted","Data":"ef43583f8104679c840d249a101e9c9f6c6b978a9eee554010eb7c86975dede9"} Feb 19 10:04:04 crc kubenswrapper[4873]: I0219 10:04:04.084976 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:04 crc kubenswrapper[4873]: W0219 10:04:04.088631 4873 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0df7963_e78f_457c_a27f_45c26232cfa7.slice/crio-1625a014d94b008c2147d0b553e57391c7230229827cf9bb223d147043ac205c WatchSource:0}: Error finding container 1625a014d94b008c2147d0b553e57391c7230229827cf9bb223d147043ac205c: Status 404 returned error can't find the container with id 1625a014d94b008c2147d0b553e57391c7230229827cf9bb223d147043ac205c Feb 19 10:04:04 crc kubenswrapper[4873]: I0219 10:04:04.766686 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0df7963-e78f-457c-a27f-45c26232cfa7","Type":"ContainerStarted","Data":"8a149e59a052af1d5903a3229e690795f2a465c06a67dc0c23e499c248c2cc1c"} Feb 19 10:04:04 crc kubenswrapper[4873]: I0219 10:04:04.766997 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0df7963-e78f-457c-a27f-45c26232cfa7","Type":"ContainerStarted","Data":"1625a014d94b008c2147d0b553e57391c7230229827cf9bb223d147043ac205c"} Feb 19 10:04:04 crc kubenswrapper[4873]: I0219 10:04:04.789277 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerStarted","Data":"bc2c5b3d4d716eb9c97dbd18c9c4f4216d977eab467433e514f9426fc5b0d5d5"} Feb 19 10:04:04 crc kubenswrapper[4873]: I0219 10:04:04.789325 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerStarted","Data":"9f70d4dce463dab5272327a4c241561e484ce7f47d746707db1ef00e991c55a3"} Feb 19 10:04:05 crc kubenswrapper[4873]: I0219 10:04:05.798280 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0df7963-e78f-457c-a27f-45c26232cfa7","Type":"ContainerStarted","Data":"69a8da314136c9cdd7ff53405c50ce004567d1311733a198b9e9331621ea2eea"} Feb 19 10:04:05 crc 
kubenswrapper[4873]: I0219 10:04:05.802506 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerStarted","Data":"6c30461b33ded90ce2556c22f47ea9a3bb617a8a47e5f7bb60658aaae8782b55"} Feb 19 10:04:05 crc kubenswrapper[4873]: I0219 10:04:05.833393 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.83336859 podStartE2EDuration="2.83336859s" podCreationTimestamp="2026-02-19 10:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:05.819049842 +0000 UTC m=+1155.108481480" watchObservedRunningTime="2026-02-19 10:04:05.83336859 +0000 UTC m=+1155.122800248" Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.566006 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.566795 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-log" containerID="cri-o://00e17aa3c77a8dac057b0211e38bac6faa2ba84727374dd1e7825f5c8cd0363b" gracePeriod=30 Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.566929 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-httpd" containerID="cri-o://4ba322d3698975ce137f4a01e18bf101e9ef8127a707662eba0de8fcfb0c7ffe" gracePeriod=30 Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.835082 4873 generic.go:334] "Generic (PLEG): container finished" podID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerID="00e17aa3c77a8dac057b0211e38bac6faa2ba84727374dd1e7825f5c8cd0363b" exitCode=143 Feb 19 
10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.835143 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerDied","Data":"00e17aa3c77a8dac057b0211e38bac6faa2ba84727374dd1e7825f5c8cd0363b"} Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.838585 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerStarted","Data":"260b2fefe2fc420bc547089ed503be33da9e38484f16bfd89fac4795ad82a3ee"} Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.839033 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:04:08 crc kubenswrapper[4873]: I0219 10:04:08.860011 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494011087 podStartE2EDuration="6.85998908s" podCreationTimestamp="2026-02-19 10:04:02 +0000 UTC" firstStartedPulling="2026-02-19 10:04:03.593220479 +0000 UTC m=+1152.882652117" lastFinishedPulling="2026-02-19 10:04:07.959198472 +0000 UTC m=+1157.248630110" observedRunningTime="2026-02-19 10:04:08.857557639 +0000 UTC m=+1158.146989287" watchObservedRunningTime="2026-02-19 10:04:08.85998908 +0000 UTC m=+1158.149420728" Feb 19 10:04:09 crc kubenswrapper[4873]: I0219 10:04:09.913796 4873 generic.go:334] "Generic (PLEG): container finished" podID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerID="4ba322d3698975ce137f4a01e18bf101e9ef8127a707662eba0de8fcfb0c7ffe" exitCode=0 Feb 19 10:04:09 crc kubenswrapper[4873]: I0219 10:04:09.913871 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerDied","Data":"4ba322d3698975ce137f4a01e18bf101e9ef8127a707662eba0de8fcfb0c7ffe"} Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 
10:04:10.062378 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.158598 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcss\" (UniqueName: \"kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.158651 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.158698 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159360 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159463 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159515 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159560 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159584 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run\") pod \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\" (UID: \"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e\") " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.159742 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs" (OuterVolumeSpecName: "logs") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.160055 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.160361 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.170046 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss" (OuterVolumeSpecName: "kube-api-access-ggcss") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "kube-api-access-ggcss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.175281 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts" (OuterVolumeSpecName: "scripts") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.193441 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.215120 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.244587 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data" (OuterVolumeSpecName: "config-data") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.244647 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" (UID: "1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261563 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261589 4873 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261600 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggcss\" (UniqueName: \"kubernetes.io/projected/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-kube-api-access-ggcss\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261610 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc 
kubenswrapper[4873]: I0219 10:04:10.261618 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261626 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.261636 4873 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.283649 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.363133 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.924606 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e","Type":"ContainerDied","Data":"d67a114aa214902359c1e29c718493f6dd023a96ca4f5d6261a47eb5f1d136c6"} Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.924652 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.924667 4873 scope.go:117] "RemoveContainer" containerID="4ba322d3698975ce137f4a01e18bf101e9ef8127a707662eba0de8fcfb0c7ffe" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.957250 4873 scope.go:117] "RemoveContainer" containerID="00e17aa3c77a8dac057b0211e38bac6faa2ba84727374dd1e7825f5c8cd0363b" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.957512 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.968362 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.984422 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:10 crc kubenswrapper[4873]: E0219 10:04:10.984833 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-httpd" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.984849 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-httpd" Feb 19 10:04:10 crc kubenswrapper[4873]: E0219 10:04:10.984866 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-log" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.984874 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-log" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.985062 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-log" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.985082 4873 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" containerName="glance-httpd" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.986081 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.990525 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 10:04:10 crc kubenswrapper[4873]: I0219 10:04:10.990710 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.017038 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073674 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58zd\" (UniqueName: \"kubernetes.io/projected/09cfd898-398f-41ae-8c45-1ed215b69683-kube-api-access-r58zd\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073742 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-config-data\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073766 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073784 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-scripts\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073799 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-logs\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073827 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073895 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.073912 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.175884 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58zd\" (UniqueName: \"kubernetes.io/projected/09cfd898-398f-41ae-8c45-1ed215b69683-kube-api-access-r58zd\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.175966 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-config-data\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.175992 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176015 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-scripts\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176037 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-logs\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " 
pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176087 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176198 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176223 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.176834 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.177008 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-logs\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: 
I0219 10:04:11.177025 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09cfd898-398f-41ae-8c45-1ed215b69683-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.181738 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.181808 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-scripts\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.182411 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-config-data\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.192989 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58zd\" (UniqueName: \"kubernetes.io/projected/09cfd898-398f-41ae-8c45-1ed215b69683-kube-api-access-r58zd\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.193819 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09cfd898-398f-41ae-8c45-1ed215b69683-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.206334 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"09cfd898-398f-41ae-8c45-1ed215b69683\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.318712 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.356848 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.357208 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-central-agent" containerID="cri-o://9f70d4dce463dab5272327a4c241561e484ce7f47d746707db1ef00e991c55a3" gracePeriod=30 Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.357642 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="proxy-httpd" containerID="cri-o://260b2fefe2fc420bc547089ed503be33da9e38484f16bfd89fac4795ad82a3ee" gracePeriod=30 Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.357693 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="sg-core" containerID="cri-o://6c30461b33ded90ce2556c22f47ea9a3bb617a8a47e5f7bb60658aaae8782b55" gracePeriod=30 Feb 19 10:04:11 
crc kubenswrapper[4873]: I0219 10:04:11.357731 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-notification-agent" containerID="cri-o://bc2c5b3d4d716eb9c97dbd18c9c4f4216d977eab467433e514f9426fc5b0d5d5" gracePeriod=30 Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.536796 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e" path="/var/lib/kubelet/pods/1da8b72b-fdc0-4c00-a1da-cdb5e8e04e8e/volumes" Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.926433 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.948011 4873 generic.go:334] "Generic (PLEG): container finished" podID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerID="260b2fefe2fc420bc547089ed503be33da9e38484f16bfd89fac4795ad82a3ee" exitCode=0 Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.948056 4873 generic.go:334] "Generic (PLEG): container finished" podID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerID="6c30461b33ded90ce2556c22f47ea9a3bb617a8a47e5f7bb60658aaae8782b55" exitCode=2 Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.948085 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerDied","Data":"260b2fefe2fc420bc547089ed503be33da9e38484f16bfd89fac4795ad82a3ee"} Feb 19 10:04:11 crc kubenswrapper[4873]: I0219 10:04:11.948132 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerDied","Data":"6c30461b33ded90ce2556c22f47ea9a3bb617a8a47e5f7bb60658aaae8782b55"} Feb 19 10:04:12 crc kubenswrapper[4873]: I0219 10:04:12.959844 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerID="bc2c5b3d4d716eb9c97dbd18c9c4f4216d977eab467433e514f9426fc5b0d5d5" exitCode=0 Feb 19 10:04:12 crc kubenswrapper[4873]: I0219 10:04:12.960003 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerDied","Data":"bc2c5b3d4d716eb9c97dbd18c9c4f4216d977eab467433e514f9426fc5b0d5d5"} Feb 19 10:04:12 crc kubenswrapper[4873]: I0219 10:04:12.961539 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09cfd898-398f-41ae-8c45-1ed215b69683","Type":"ContainerStarted","Data":"3ce17d2d97b2dc6fe772c16831da4ec0b873c6b5ac2967b58cd02540d9bfbe45"} Feb 19 10:04:12 crc kubenswrapper[4873]: I0219 10:04:12.961565 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09cfd898-398f-41ae-8c45-1ed215b69683","Type":"ContainerStarted","Data":"fb5ed99e6fd5bdd21c675a8343e48db30ee86490a2c63b9c1e93da65969c48b3"} Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.453484 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.453552 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.495776 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.498308 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.976567 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"09cfd898-398f-41ae-8c45-1ed215b69683","Type":"ContainerStarted","Data":"1ea2be76674c1bf99146edf11fcce7cce9a4f0501326fefbfc0f8f89106f3f7a"} Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.978462 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:13 crc kubenswrapper[4873]: I0219 10:04:13.978496 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:14 crc kubenswrapper[4873]: I0219 10:04:14.001557 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.00154145 podStartE2EDuration="4.00154145s" podCreationTimestamp="2026-02-19 10:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:13.998432112 +0000 UTC m=+1163.287863750" watchObservedRunningTime="2026-02-19 10:04:14.00154145 +0000 UTC m=+1163.290973088" Feb 19 10:04:15 crc kubenswrapper[4873]: I0219 10:04:15.767875 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:15 crc kubenswrapper[4873]: I0219 10:04:15.770615 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:15.999936 4873 generic.go:334] "Generic (PLEG): container finished" podID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerID="9f70d4dce463dab5272327a4c241561e484ce7f47d746707db1ef00e991c55a3" exitCode=0 Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.000083 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerDied","Data":"9f70d4dce463dab5272327a4c241561e484ce7f47d746707db1ef00e991c55a3"} Feb 
19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.237237 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393732 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393798 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdn9\" (UniqueName: \"kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393837 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393868 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393909 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 
10:04:16.393931 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.393959 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd\") pod \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\" (UID: \"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8\") " Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.394953 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.397860 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.400690 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts" (OuterVolumeSpecName: "scripts") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.401026 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9" (OuterVolumeSpecName: "kube-api-access-ptdn9") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "kube-api-access-ptdn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.430094 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.474396 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.496961 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.496997 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.497010 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.497023 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.497034 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptdn9\" (UniqueName: \"kubernetes.io/projected/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-kube-api-access-ptdn9\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.497048 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.501345 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data" (OuterVolumeSpecName: "config-data") pod "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" (UID: "8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:16 crc kubenswrapper[4873]: I0219 10:04:16.599413 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.014324 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8","Type":"ContainerDied","Data":"ef43583f8104679c840d249a101e9c9f6c6b978a9eee554010eb7c86975dede9"} Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.014831 4873 scope.go:117] "RemoveContainer" containerID="260b2fefe2fc420bc547089ed503be33da9e38484f16bfd89fac4795ad82a3ee" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.014403 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.041698 4873 scope.go:117] "RemoveContainer" containerID="6c30461b33ded90ce2556c22f47ea9a3bb617a8a47e5f7bb60658aaae8782b55" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.087161 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.104482 4873 scope.go:117] "RemoveContainer" containerID="bc2c5b3d4d716eb9c97dbd18c9c4f4216d977eab467433e514f9426fc5b0d5d5" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.112641 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.123595 4873 scope.go:117] "RemoveContainer" containerID="9f70d4dce463dab5272327a4c241561e484ce7f47d746707db1ef00e991c55a3" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.125880 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:17 crc 
kubenswrapper[4873]: E0219 10:04:17.126218 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-notification-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126234 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-notification-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: E0219 10:04:17.126252 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="sg-core" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126260 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="sg-core" Feb 19 10:04:17 crc kubenswrapper[4873]: E0219 10:04:17.126272 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-central-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126278 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-central-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: E0219 10:04:17.126296 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="proxy-httpd" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126301 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="proxy-httpd" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126471 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-notification-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126485 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" 
containerName="proxy-httpd" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126495 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="sg-core" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.126511 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" containerName="ceilometer-central-agent" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.128887 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.134440 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.134836 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.143692 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315357 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315404 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315450 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315541 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbx52\" (UniqueName: \"kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315588 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315699 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.315744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.416928 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.417289 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbx52\" (UniqueName: \"kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.417328 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.417915 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.417415 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.417985 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.418063 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.418096 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.418443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.426294 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.427901 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.428552 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.430163 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.440136 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbx52\" (UniqueName: \"kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52\") pod \"ceilometer-0\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.463385 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.499674 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8" path="/var/lib/kubelet/pods/8efa5d9d-e97a-4fef-bb67-af8bdbe7b3f8/volumes" Feb 19 10:04:17 crc kubenswrapper[4873]: I0219 10:04:17.900419 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:17 crc kubenswrapper[4873]: W0219 10:04:17.911398 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode037b85a_1abe_41da_a113_59129451f35f.slice/crio-14e5ec3f770f34a8b125fd308c847df1b6b0992739a9c9f49371a6a91090b5cf WatchSource:0}: Error finding container 14e5ec3f770f34a8b125fd308c847df1b6b0992739a9c9f49371a6a91090b5cf: Status 404 returned error can't find the container with id 14e5ec3f770f34a8b125fd308c847df1b6b0992739a9c9f49371a6a91090b5cf Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.025690 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerStarted","Data":"14e5ec3f770f34a8b125fd308c847df1b6b0992739a9c9f49371a6a91090b5cf"} Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.240174 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.240476 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.240525 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.241074 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:04:18 crc kubenswrapper[4873]: I0219 10:04:18.241168 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0" gracePeriod=600 Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.037865 
4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerStarted","Data":"3c42726335cd97f7c97751f39a53a6a1c3ede5f0168be405b1c1c8f5ccc72a65"} Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.038240 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerStarted","Data":"0a629e4ae5a7dbd420903dd59abab426b5d7e8df23fe2344c827772936600ccb"} Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.041049 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0" exitCode=0 Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.041093 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0"} Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.041127 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad"} Feb 19 10:04:19 crc kubenswrapper[4873]: I0219 10:04:19.041141 4873 scope.go:117] "RemoveContainer" containerID="4e9052ea8663914dbd7738866b6f51c9865aab9ba0562919ffd7db3fb01e7ded" Feb 19 10:04:20 crc kubenswrapper[4873]: I0219 10:04:20.051730 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerStarted","Data":"6cc5365f6277dce3c3167a3985183e662248a1d4d6e0a45883e0d5e65cacc65f"} Feb 19 10:04:21 crc kubenswrapper[4873]: I0219 
10:04:21.319602 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:04:21 crc kubenswrapper[4873]: I0219 10:04:21.319881 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:04:21 crc kubenswrapper[4873]: I0219 10:04:21.352798 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:04:21 crc kubenswrapper[4873]: I0219 10:04:21.372462 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.075674 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerStarted","Data":"e34e712b4e7100f172666d402654e7bfd335c81523cf9222b376f6ba0a3e3352"} Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.075997 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.077709 4873 generic.go:334] "Generic (PLEG): container finished" podID="2f8fe617-c1d5-41f8-a23a-eeb88444f620" containerID="e9c86902c9a53b767e99a6b86b96ed298fea1e2244a0785fd8c7eeb7d4f69fa7" exitCode=0 Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.080163 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" event={"ID":"2f8fe617-c1d5-41f8-a23a-eeb88444f620","Type":"ContainerDied","Data":"e9c86902c9a53b767e99a6b86b96ed298fea1e2244a0785fd8c7eeb7d4f69fa7"} Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.080200 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.080321 4873 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:04:22 crc kubenswrapper[4873]: I0219 10:04:22.126633 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.70887426 podStartE2EDuration="5.126611411s" podCreationTimestamp="2026-02-19 10:04:17 +0000 UTC" firstStartedPulling="2026-02-19 10:04:17.91350922 +0000 UTC m=+1167.202940858" lastFinishedPulling="2026-02-19 10:04:21.331246371 +0000 UTC m=+1170.620678009" observedRunningTime="2026-02-19 10:04:22.107529254 +0000 UTC m=+1171.396960892" watchObservedRunningTime="2026-02-19 10:04:22.126611411 +0000 UTC m=+1171.416043049" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.519455 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.644733 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts\") pod \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.645170 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle\") pod \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.645348 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74xc\" (UniqueName: \"kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc\") pod \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 
10:04:23.645422 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data\") pod \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\" (UID: \"2f8fe617-c1d5-41f8-a23a-eeb88444f620\") " Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.654263 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc" (OuterVolumeSpecName: "kube-api-access-d74xc") pod "2f8fe617-c1d5-41f8-a23a-eeb88444f620" (UID: "2f8fe617-c1d5-41f8-a23a-eeb88444f620"). InnerVolumeSpecName "kube-api-access-d74xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.654377 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts" (OuterVolumeSpecName: "scripts") pod "2f8fe617-c1d5-41f8-a23a-eeb88444f620" (UID: "2f8fe617-c1d5-41f8-a23a-eeb88444f620"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.719474 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f8fe617-c1d5-41f8-a23a-eeb88444f620" (UID: "2f8fe617-c1d5-41f8-a23a-eeb88444f620"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.749543 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.749588 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.749603 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74xc\" (UniqueName: \"kubernetes.io/projected/2f8fe617-c1d5-41f8-a23a-eeb88444f620-kube-api-access-d74xc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.805997 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data" (OuterVolumeSpecName: "config-data") pod "2f8fe617-c1d5-41f8-a23a-eeb88444f620" (UID: "2f8fe617-c1d5-41f8-a23a-eeb88444f620"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:23 crc kubenswrapper[4873]: I0219 10:04:23.851642 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8fe617-c1d5-41f8-a23a-eeb88444f620-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.096242 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.096255 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.096262 4873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.096247 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qgm8t" event={"ID":"2f8fe617-c1d5-41f8-a23a-eeb88444f620","Type":"ContainerDied","Data":"5c51d1e42a2baea6c1a9d92c8fcf55ee9de4da189a67e10f9dca665987216a5f"} Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.096470 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c51d1e42a2baea6c1a9d92c8fcf55ee9de4da189a67e10f9dca665987216a5f" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.228566 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:24 crc kubenswrapper[4873]: E0219 10:04:24.229239 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8fe617-c1d5-41f8-a23a-eeb88444f620" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.229256 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8fe617-c1d5-41f8-a23a-eeb88444f620" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.229422 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8fe617-c1d5-41f8-a23a-eeb88444f620" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.230044 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.231796 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.231950 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c85mr" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.252264 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.363404 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.363877 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg566\" (UniqueName: \"kubernetes.io/projected/c25b9f1f-0533-4e00-a926-08639b1b2266-kube-api-access-dg566\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.363956 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.411348 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 
10:04:24.465564 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg566\" (UniqueName: \"kubernetes.io/projected/c25b9f1f-0533-4e00-a926-08639b1b2266-kube-api-access-dg566\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.465639 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.465697 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.471553 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.472067 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25b9f1f-0533-4e00-a926-08639b1b2266-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.491177 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg566\" 
(UniqueName: \"kubernetes.io/projected/c25b9f1f-0533-4e00-a926-08639b1b2266-kube-api-access-dg566\") pod \"nova-cell0-conductor-0\" (UID: \"c25b9f1f-0533-4e00-a926-08639b1b2266\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.574901 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:24 crc kubenswrapper[4873]: I0219 10:04:24.875969 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.039167 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.108647 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c25b9f1f-0533-4e00-a926-08639b1b2266","Type":"ContainerStarted","Data":"5186d19991d4137e906d0001a7a43a897ccefac9bc9021e4c650a7bca263ec2f"} Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.959844 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.960374 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-central-agent" containerID="cri-o://0a629e4ae5a7dbd420903dd59abab426b5d7e8df23fe2344c827772936600ccb" gracePeriod=30 Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.960442 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="proxy-httpd" containerID="cri-o://e34e712b4e7100f172666d402654e7bfd335c81523cf9222b376f6ba0a3e3352" gracePeriod=30 Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.960442 4873 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="sg-core" containerID="cri-o://6cc5365f6277dce3c3167a3985183e662248a1d4d6e0a45883e0d5e65cacc65f" gracePeriod=30 Feb 19 10:04:25 crc kubenswrapper[4873]: I0219 10:04:25.960481 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-notification-agent" containerID="cri-o://3c42726335cd97f7c97751f39a53a6a1c3ede5f0168be405b1c1c8f5ccc72a65" gracePeriod=30 Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.119485 4873 generic.go:334] "Generic (PLEG): container finished" podID="e037b85a-1abe-41da-a113-59129451f35f" containerID="e34e712b4e7100f172666d402654e7bfd335c81523cf9222b376f6ba0a3e3352" exitCode=0 Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.119797 4873 generic.go:334] "Generic (PLEG): container finished" podID="e037b85a-1abe-41da-a113-59129451f35f" containerID="6cc5365f6277dce3c3167a3985183e662248a1d4d6e0a45883e0d5e65cacc65f" exitCode=2 Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.119568 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerDied","Data":"e34e712b4e7100f172666d402654e7bfd335c81523cf9222b376f6ba0a3e3352"} Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.119864 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerDied","Data":"6cc5365f6277dce3c3167a3985183e662248a1d4d6e0a45883e0d5e65cacc65f"} Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.121985 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"c25b9f1f-0533-4e00-a926-08639b1b2266","Type":"ContainerStarted","Data":"46e88f8a01f1bbc0f588e2ef530b6b2c65675b7bdc761574c9c73fdc8cdb8472"} Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.122179 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:26 crc kubenswrapper[4873]: I0219 10:04:26.142758 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.142737956 podStartE2EDuration="2.142737956s" podCreationTimestamp="2026-02-19 10:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:26.140976412 +0000 UTC m=+1175.430408050" watchObservedRunningTime="2026-02-19 10:04:26.142737956 +0000 UTC m=+1175.432169594" Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.153224 4873 generic.go:334] "Generic (PLEG): container finished" podID="e037b85a-1abe-41da-a113-59129451f35f" containerID="3c42726335cd97f7c97751f39a53a6a1c3ede5f0168be405b1c1c8f5ccc72a65" exitCode=0 Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.153813 4873 generic.go:334] "Generic (PLEG): container finished" podID="e037b85a-1abe-41da-a113-59129451f35f" containerID="0a629e4ae5a7dbd420903dd59abab426b5d7e8df23fe2344c827772936600ccb" exitCode=0 Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.153277 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerDied","Data":"3c42726335cd97f7c97751f39a53a6a1c3ede5f0168be405b1c1c8f5ccc72a65"} Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.153886 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerDied","Data":"0a629e4ae5a7dbd420903dd59abab426b5d7e8df23fe2344c827772936600ccb"} Feb 19 
10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.327828 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.442833 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbx52\" (UniqueName: \"kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.442879 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.442915 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.442931 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.442991 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") " Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 
10:04:28.443751 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") "
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.443831 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd\") pod \"e037b85a-1abe-41da-a113-59129451f35f\" (UID: \"e037b85a-1abe-41da-a113-59129451f35f\") "
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.444319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.444378 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.445356 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.445542 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e037b85a-1abe-41da-a113-59129451f35f-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.449239 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts" (OuterVolumeSpecName: "scripts") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.452633 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52" (OuterVolumeSpecName: "kube-api-access-dbx52") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "kube-api-access-dbx52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.474445 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.529525 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.540995 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data" (OuterVolumeSpecName: "config-data") pod "e037b85a-1abe-41da-a113-59129451f35f" (UID: "e037b85a-1abe-41da-a113-59129451f35f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.547262 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.547286 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.547297 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbx52\" (UniqueName: \"kubernetes.io/projected/e037b85a-1abe-41da-a113-59129451f35f-kube-api-access-dbx52\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.547306 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:28 crc kubenswrapper[4873]: I0219 10:04:28.547314 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e037b85a-1abe-41da-a113-59129451f35f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.163604 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e037b85a-1abe-41da-a113-59129451f35f","Type":"ContainerDied","Data":"14e5ec3f770f34a8b125fd308c847df1b6b0992739a9c9f49371a6a91090b5cf"}
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.163652 4873 scope.go:117] "RemoveContainer" containerID="e34e712b4e7100f172666d402654e7bfd335c81523cf9222b376f6ba0a3e3352"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.163772 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.195792 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.198827 4873 scope.go:117] "RemoveContainer" containerID="6cc5365f6277dce3c3167a3985183e662248a1d4d6e0a45883e0d5e65cacc65f"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.211225 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.221906 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:29 crc kubenswrapper[4873]: E0219 10:04:29.222482 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-central-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222500 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-central-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: E0219 10:04:29.222515 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="sg-core"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222522 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="sg-core"
Feb 19 10:04:29 crc kubenswrapper[4873]: E0219 10:04:29.222546 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-notification-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222554 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-notification-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: E0219 10:04:29.222573 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="proxy-httpd"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222582 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="proxy-httpd"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222822 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-central-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222846 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="proxy-httpd"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222858 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="ceilometer-notification-agent"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.222868 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e037b85a-1abe-41da-a113-59129451f35f" containerName="sg-core"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.224756 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.227451 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.227584 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.233511 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.266262 4873 scope.go:117] "RemoveContainer" containerID="3c42726335cd97f7c97751f39a53a6a1c3ede5f0168be405b1c1c8f5ccc72a65"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.286751 4873 scope.go:117] "RemoveContainer" containerID="0a629e4ae5a7dbd420903dd59abab426b5d7e8df23fe2344c827772936600ccb"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.360457 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.360709 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.360829 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.360930 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fsk\" (UniqueName: \"kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.361010 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.361147 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.361246 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463129 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463420 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463553 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463741 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fsk\" (UniqueName: \"kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463824 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.463945 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.464029 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.464474 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.464543 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.467815 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.468094 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.469224 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.470645 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.478458 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fsk\" (UniqueName: \"kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk\") pod \"ceilometer-0\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") " pod="openstack/ceilometer-0"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.497342 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e037b85a-1abe-41da-a113-59129451f35f" path="/var/lib/kubelet/pods/e037b85a-1abe-41da-a113-59129451f35f/volumes"
Feb 19 10:04:29 crc kubenswrapper[4873]: I0219 10:04:29.566873 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:30 crc kubenswrapper[4873]: I0219 10:04:30.012202 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:30 crc kubenswrapper[4873]: I0219 10:04:30.018667 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 10:04:30 crc kubenswrapper[4873]: I0219 10:04:30.174300 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerStarted","Data":"a10f8c50de08f100cd9a2d7823fcccf70bf26fcbe7aced261a97c5fa92b15f9b"}
Feb 19 10:04:31 crc kubenswrapper[4873]: I0219 10:04:31.186246 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerStarted","Data":"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"}
Feb 19 10:04:31 crc kubenswrapper[4873]: I0219 10:04:31.186747 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerStarted","Data":"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"}
Feb 19 10:04:31 crc kubenswrapper[4873]: I0219 10:04:31.186758 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerStarted","Data":"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"}
Feb 19 10:04:33 crc kubenswrapper[4873]: I0219 10:04:33.207047 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerStarted","Data":"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"}
Feb 19 10:04:33 crc kubenswrapper[4873]: I0219 10:04:33.207626 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 10:04:33 crc kubenswrapper[4873]: I0219 10:04:33.246755 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.610104585 podStartE2EDuration="4.246734772s" podCreationTimestamp="2026-02-19 10:04:29 +0000 UTC" firstStartedPulling="2026-02-19 10:04:30.01845989 +0000 UTC m=+1179.307891528" lastFinishedPulling="2026-02-19 10:04:32.655090077 +0000 UTC m=+1181.944521715" observedRunningTime="2026-02-19 10:04:33.241741177 +0000 UTC m=+1182.531172815" watchObservedRunningTime="2026-02-19 10:04:33.246734772 +0000 UTC m=+1182.536166410"
Feb 19 10:04:34 crc kubenswrapper[4873]: I0219 10:04:34.603630 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.093440 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xzlnz"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.095001 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.104653 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.104885 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.108744 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xzlnz"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.175201 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.175445 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6k84\" (UniqueName: \"kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.175595 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.175723 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.277563 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.277899 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6k84\" (UniqueName: \"kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.277968 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.277995 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.293435 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.293492 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.293895 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.303981 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6k84\" (UniqueName: \"kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84\") pod \"nova-cell0-cell-mapping-xzlnz\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.329165 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.330868 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.335027 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.367502 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.382572 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.382630 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.382662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.382717 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrln\" (UniqueName: \"kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.388376 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.394387 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.400533 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.416584 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xzlnz"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.444661 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499594 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499656 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499681 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499734 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499770 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499788 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrrj\" (UniqueName: \"kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499816 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.499860 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnrln\" (UniqueName: \"kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.504554 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.507449 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.525588 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.533238 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnrln\" (UniqueName: \"kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln\") pod \"nova-api-0\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " pod="openstack/nova-api-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.561405 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.562848 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.591781 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.592148 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.672118 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.672237 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.672303 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.672505 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqrrj\" (UniqueName: \"kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.674650 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.676708 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.695513 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.697359 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.697620 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.698899 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.703161 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"]
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.785428 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqrrj\" (UniqueName: \"kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj\") pod \"nova-metadata-0\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " pod="openstack/nova-metadata-0"
Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.785553 4873 util.go:30] "No sandbox
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.787822 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802543 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrvh\" (UniqueName: \"kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802786 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmjb\" (UniqueName: \"kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802826 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802847 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802862 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802885 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802904 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802949 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802968 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.802993 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmp6\" (UniqueName: \"kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.803031 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.803046 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.815685 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.863214 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"] Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.903967 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.904001 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.904018 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.904040 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.904060 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.904078 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.909936 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config\") pod \"dnsmasq-dns-59574c798f-md9g4\" 
(UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.910018 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmp6\" (UniqueName: \"kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.910122 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.910143 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.910258 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrvh\" (UniqueName: \"kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.910326 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmjb\" (UniqueName: \"kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.911521 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.920166 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.940996 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.941448 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.941525 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.945400 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.946298 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:35 crc kubenswrapper[4873]: I0219 10:04:35.959345 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.083459 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmjb\" (UniqueName: \"kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.083962 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmp6\" (UniqueName: \"kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6\") pod \"dnsmasq-dns-59574c798f-md9g4\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.090603 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:36 crc kubenswrapper[4873]: 
I0219 10:04:36.095249 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.095574 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrvh\" (UniqueName: \"kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh\") pod \"nova-scheduler-0\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.102734 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.181526 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.284600 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.364715 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xzlnz"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.717224 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.805734 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9z5nq"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.807432 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.813572 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.813839 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.817346 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9z5nq"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.846336 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.846437 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48kf\" (UniqueName: \"kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.846558 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.846611 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.903590 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.935364 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.948265 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48kf\" (UniqueName: \"kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.948366 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.948405 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.948461 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.970095 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.972201 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.972849 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:36 crc kubenswrapper[4873]: I0219 10:04:36.974463 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48kf\" (UniqueName: \"kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf\") pod \"nova-cell1-conductor-db-sync-9z5nq\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.064210 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 
10:04:37.094761 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"] Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.155283 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.291993 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerStarted","Data":"cd7174e87fb58baa828b8b7cbbe39dc5f4224bb9b672908e6aee18a4d7cb73c7"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.297611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerStarted","Data":"d76a4cd46cc684d689557d9372222e68498dc6bce68b7bb1dbfe2b38a2ff9d6a"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.306506 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xzlnz" event={"ID":"54b81c17-9130-4def-8021-e73168601bf6","Type":"ContainerStarted","Data":"c163e1fbae8dfb18e81d4177d941e04ca8d149e8d88a196ee094871f3dd31d8c"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.306549 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xzlnz" event={"ID":"54b81c17-9130-4def-8021-e73168601bf6","Type":"ContainerStarted","Data":"68ee08e86d5531761dadfdc7d5bd0ceb0e4ff127a8a81f8599f657d66df48dcd"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.319763 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"caad2a71-260f-41e5-99f0-532d73995f41","Type":"ContainerStarted","Data":"6286e92557ca766dfcf83501ac2e45ef3faaab8e9e1c247dd572c05c5d0518c2"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.332552 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-xzlnz" podStartSLOduration=2.332383255 podStartE2EDuration="2.332383255s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:37.327439991 +0000 UTC m=+1186.616871629" watchObservedRunningTime="2026-02-19 10:04:37.332383255 +0000 UTC m=+1186.621814893" Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.342815 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"592b92b0-44a1-4386-8f2e-8a55633dedd8","Type":"ContainerStarted","Data":"dd5cbf40dbe04c7e636d3d6dd6dd491656051f5351076f90a0afd9cb6afdd2b3"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.353163 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59574c798f-md9g4" event={"ID":"561650f5-0705-4bab-903d-66bba11301ce","Type":"ContainerStarted","Data":"36949a05a228a205fbf13f7609b5591909a30f0981a5fef6bf17ca7a531f1283"} Feb 19 10:04:37 crc kubenswrapper[4873]: I0219 10:04:37.842550 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9z5nq"] Feb 19 10:04:38 crc kubenswrapper[4873]: I0219 10:04:38.389203 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" event={"ID":"96fca831-509a-4abd-bb7e-2c0f4704368b","Type":"ContainerStarted","Data":"fa528c871922eeb11614e9dc0af7c2459d78bffa25b2a2a86e1fa5e00eb6941c"} Feb 19 10:04:38 crc kubenswrapper[4873]: I0219 10:04:38.417324 4873 generic.go:334] "Generic (PLEG): container finished" podID="561650f5-0705-4bab-903d-66bba11301ce" containerID="0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f" exitCode=0 Feb 19 10:04:38 crc kubenswrapper[4873]: I0219 10:04:38.419583 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59574c798f-md9g4" 
event={"ID":"561650f5-0705-4bab-903d-66bba11301ce","Type":"ContainerDied","Data":"0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f"} Feb 19 10:04:39 crc kubenswrapper[4873]: I0219 10:04:39.120248 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:04:39 crc kubenswrapper[4873]: I0219 10:04:39.132507 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:39 crc kubenswrapper[4873]: I0219 10:04:39.427371 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" event={"ID":"96fca831-509a-4abd-bb7e-2c0f4704368b","Type":"ContainerStarted","Data":"343f5f5d97db66db1963a29b53ef93078842c1069756343de5aa869eb8885cd9"} Feb 19 10:04:39 crc kubenswrapper[4873]: I0219 10:04:39.447064 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" podStartSLOduration=3.447042819 podStartE2EDuration="3.447042819s" podCreationTimestamp="2026-02-19 10:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:39.442869454 +0000 UTC m=+1188.732301092" watchObservedRunningTime="2026-02-19 10:04:39.447042819 +0000 UTC m=+1188.736474457" Feb 19 10:04:40 crc kubenswrapper[4873]: I0219 10:04:40.438513 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59574c798f-md9g4" event={"ID":"561650f5-0705-4bab-903d-66bba11301ce","Type":"ContainerStarted","Data":"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548"} Feb 19 10:04:40 crc kubenswrapper[4873]: I0219 10:04:40.439009 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:40 crc kubenswrapper[4873]: I0219 10:04:40.467973 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-59574c798f-md9g4" podStartSLOduration=5.467952669 podStartE2EDuration="5.467952669s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:40.460034411 +0000 UTC m=+1189.749466069" watchObservedRunningTime="2026-02-19 10:04:40.467952669 +0000 UTC m=+1189.757384307" Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.462670 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"592b92b0-44a1-4386-8f2e-8a55633dedd8","Type":"ContainerStarted","Data":"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.463024 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="592b92b0-44a1-4386-8f2e-8a55633dedd8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3" gracePeriod=30 Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.466329 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerStarted","Data":"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.466369 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerStarted","Data":"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.466476 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-log" 
containerID="cri-o://43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" gracePeriod=30 Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.466570 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-metadata" containerID="cri-o://e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" gracePeriod=30 Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.470092 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerStarted","Data":"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.470234 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerStarted","Data":"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.475657 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"caad2a71-260f-41e5-99f0-532d73995f41","Type":"ContainerStarted","Data":"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225"} Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.488985 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.283272375 podStartE2EDuration="7.488970111s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="2026-02-19 10:04:37.078636979 +0000 UTC m=+1186.368068617" lastFinishedPulling="2026-02-19 10:04:41.284334715 +0000 UTC m=+1190.573766353" observedRunningTime="2026-02-19 10:04:42.488627813 +0000 UTC m=+1191.778059451" watchObservedRunningTime="2026-02-19 10:04:42.488970111 +0000 UTC m=+1191.778401749" Feb 19 10:04:42 crc 
kubenswrapper[4873]: I0219 10:04:42.509195 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.957040656 podStartE2EDuration="7.509179306s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="2026-02-19 10:04:36.72677169 +0000 UTC m=+1186.016203328" lastFinishedPulling="2026-02-19 10:04:41.27891032 +0000 UTC m=+1190.568341978" observedRunningTime="2026-02-19 10:04:42.5065378 +0000 UTC m=+1191.795969438" watchObservedRunningTime="2026-02-19 10:04:42.509179306 +0000 UTC m=+1191.798610944" Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.533336 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.174162597 podStartE2EDuration="7.533290499s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="2026-02-19 10:04:36.922933046 +0000 UTC m=+1186.212364684" lastFinishedPulling="2026-02-19 10:04:41.282060948 +0000 UTC m=+1190.571492586" observedRunningTime="2026-02-19 10:04:42.531342271 +0000 UTC m=+1191.820773909" watchObservedRunningTime="2026-02-19 10:04:42.533290499 +0000 UTC m=+1191.822722127" Feb 19 10:04:42 crc kubenswrapper[4873]: I0219 10:04:42.562487 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.215551942 podStartE2EDuration="7.562470309s" podCreationTimestamp="2026-02-19 10:04:35 +0000 UTC" firstStartedPulling="2026-02-19 10:04:36.931689025 +0000 UTC m=+1186.221120663" lastFinishedPulling="2026-02-19 10:04:41.278607392 +0000 UTC m=+1190.568039030" observedRunningTime="2026-02-19 10:04:42.553081984 +0000 UTC m=+1191.842513622" watchObservedRunningTime="2026-02-19 10:04:42.562470309 +0000 UTC m=+1191.851901947" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.113869 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.212753 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqrrj\" (UniqueName: \"kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj\") pod \"53e8439c-3afb-4cde-b758-58871323cb9d\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.212894 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle\") pod \"53e8439c-3afb-4cde-b758-58871323cb9d\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.212921 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data\") pod \"53e8439c-3afb-4cde-b758-58871323cb9d\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.213003 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs\") pod \"53e8439c-3afb-4cde-b758-58871323cb9d\" (UID: \"53e8439c-3afb-4cde-b758-58871323cb9d\") " Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.213800 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs" (OuterVolumeSpecName: "logs") pod "53e8439c-3afb-4cde-b758-58871323cb9d" (UID: "53e8439c-3afb-4cde-b758-58871323cb9d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.219794 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj" (OuterVolumeSpecName: "kube-api-access-mqrrj") pod "53e8439c-3afb-4cde-b758-58871323cb9d" (UID: "53e8439c-3afb-4cde-b758-58871323cb9d"). InnerVolumeSpecName "kube-api-access-mqrrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.245005 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53e8439c-3afb-4cde-b758-58871323cb9d" (UID: "53e8439c-3afb-4cde-b758-58871323cb9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.248637 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data" (OuterVolumeSpecName: "config-data") pod "53e8439c-3afb-4cde-b758-58871323cb9d" (UID: "53e8439c-3afb-4cde-b758-58871323cb9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.315349 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.315385 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e8439c-3afb-4cde-b758-58871323cb9d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.315394 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqrrj\" (UniqueName: \"kubernetes.io/projected/53e8439c-3afb-4cde-b758-58871323cb9d-kube-api-access-mqrrj\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.315404 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e8439c-3afb-4cde-b758-58871323cb9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.486782 4873 generic.go:334] "Generic (PLEG): container finished" podID="53e8439c-3afb-4cde-b758-58871323cb9d" containerID="e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" exitCode=0 Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.487070 4873 generic.go:334] "Generic (PLEG): container finished" podID="53e8439c-3afb-4cde-b758-58871323cb9d" containerID="43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" exitCode=143 Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.487664 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.504393 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerDied","Data":"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92"} Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.504440 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerDied","Data":"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3"} Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.504496 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53e8439c-3afb-4cde-b758-58871323cb9d","Type":"ContainerDied","Data":"cd7174e87fb58baa828b8b7cbbe39dc5f4224bb9b672908e6aee18a4d7cb73c7"} Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.504519 4873 scope.go:117] "RemoveContainer" containerID="e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.547176 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.570343 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.570945 4873 scope.go:117] "RemoveContainer" containerID="43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.581204 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:43 crc kubenswrapper[4873]: E0219 10:04:43.581600 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-log" Feb 19 10:04:43 crc 
kubenswrapper[4873]: I0219 10:04:43.581638 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-log" Feb 19 10:04:43 crc kubenswrapper[4873]: E0219 10:04:43.581656 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-metadata" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.581662 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-metadata" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.581839 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-log" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.581855 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" containerName="nova-metadata-metadata" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.582798 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.594493 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.594811 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.631078 4873 scope.go:117] "RemoveContainer" containerID="e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" Feb 19 10:04:43 crc kubenswrapper[4873]: E0219 10:04:43.631890 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92\": container with ID starting with e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92 not found: ID does not exist" containerID="e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.632159 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92"} err="failed to get container status \"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92\": rpc error: code = NotFound desc = could not find container \"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92\": container with ID starting with e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92 not found: ID does not exist" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.632196 4873 scope.go:117] "RemoveContainer" containerID="43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" Feb 19 10:04:43 crc kubenswrapper[4873]: E0219 10:04:43.633122 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3\": container with ID starting with 43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3 not found: ID does not exist" containerID="43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.633177 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3"} err="failed to get container status \"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3\": rpc error: code = NotFound desc = could not find container \"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3\": container with ID starting with 43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3 not found: ID does not exist" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.633211 4873 scope.go:117] "RemoveContainer" containerID="e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.633585 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92"} err="failed to get container status \"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92\": rpc error: code = NotFound desc = could not find container \"e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92\": container with ID starting with e47c982444f3b7e8df76455de59692aef41c0f5dc5a6d65d28ae042a8818af92 not found: ID does not exist" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.633630 4873 scope.go:117] "RemoveContainer" containerID="43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.634034 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3"} err="failed to get container status \"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3\": rpc error: code = NotFound desc = could not find container \"43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3\": container with ID starting with 43a616daef0cb7b8596108c40bf81ff6c3694874c913665dcb3bf8b7746c76c3 not found: ID does not exist" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.643399 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.732559 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkwd\" (UniqueName: \"kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.733149 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.733405 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.733477 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.733511 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.835359 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkwd\" (UniqueName: \"kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.835741 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.835785 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.835814 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " 
pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.835836 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.836630 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.840572 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.840907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.851731 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.853144 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkwd\" (UniqueName: 
\"kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd\") pod \"nova-metadata-0\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " pod="openstack/nova-metadata-0" Feb 19 10:04:43 crc kubenswrapper[4873]: I0219 10:04:43.931724 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:44 crc kubenswrapper[4873]: I0219 10:04:44.423757 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:44 crc kubenswrapper[4873]: I0219 10:04:44.501962 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerStarted","Data":"ecd5ec7a0c35523bee927dcc16abb03decd8d6779d5d402a7592d31b466ab8a8"} Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.699952 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e8439c-3afb-4cde-b758-58871323cb9d" path="/var/lib/kubelet/pods/53e8439c-3afb-4cde-b758-58871323cb9d/volumes" Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.703640 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerStarted","Data":"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796"} Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.703683 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerStarted","Data":"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3"} Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.744909 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.744879645 podStartE2EDuration="2.744879645s" podCreationTimestamp="2026-02-19 10:04:43 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:45.735786607 +0000 UTC m=+1195.025218285" watchObservedRunningTime="2026-02-19 10:04:45.744879645 +0000 UTC m=+1195.034311293" Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.786697 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:04:45 crc kubenswrapper[4873]: I0219 10:04:45.786763 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.104490 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.104537 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.159429 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.185084 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.253190 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.253661 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="dnsmasq-dns" containerID="cri-o://89333ead7926aa2618b4a528f8056c909774aca95003b189eba1f9e1ae277295" gracePeriod=10 Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.286027 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 
10:04:46.719541 4873 generic.go:334] "Generic (PLEG): container finished" podID="54b81c17-9130-4def-8021-e73168601bf6" containerID="c163e1fbae8dfb18e81d4177d941e04ca8d149e8d88a196ee094871f3dd31d8c" exitCode=0 Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.719619 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xzlnz" event={"ID":"54b81c17-9130-4def-8021-e73168601bf6","Type":"ContainerDied","Data":"c163e1fbae8dfb18e81d4177d941e04ca8d149e8d88a196ee094871f3dd31d8c"} Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.722400 4873 generic.go:334] "Generic (PLEG): container finished" podID="e78542dc-01da-47dc-aec5-a380b7484425" containerID="89333ead7926aa2618b4a528f8056c909774aca95003b189eba1f9e1ae277295" exitCode=0 Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.723298 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" event={"ID":"e78542dc-01da-47dc-aec5-a380b7484425","Type":"ContainerDied","Data":"89333ead7926aa2618b4a528f8056c909774aca95003b189eba1f9e1ae277295"} Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.767852 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.869557 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.869613 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:04:46 
crc kubenswrapper[4873]: I0219 10:04:46.882359 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.922402 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhw4\" (UniqueName: \"kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.922436 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.922524 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.922611 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.922629 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 
10:04:46.922652 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb\") pod \"e78542dc-01da-47dc-aec5-a380b7484425\" (UID: \"e78542dc-01da-47dc-aec5-a380b7484425\") " Feb 19 10:04:46 crc kubenswrapper[4873]: I0219 10:04:46.941484 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4" (OuterVolumeSpecName: "kube-api-access-fwhw4") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "kube-api-access-fwhw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.002730 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.018648 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.021585 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.026765 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.026815 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.026830 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.026840 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhw4\" (UniqueName: \"kubernetes.io/projected/e78542dc-01da-47dc-aec5-a380b7484425-kube-api-access-fwhw4\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.053584 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.066522 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config" (OuterVolumeSpecName: "config") pod "e78542dc-01da-47dc-aec5-a380b7484425" (UID: "e78542dc-01da-47dc-aec5-a380b7484425"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.129048 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.129091 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e78542dc-01da-47dc-aec5-a380b7484425-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.733817 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" event={"ID":"e78542dc-01da-47dc-aec5-a380b7484425","Type":"ContainerDied","Data":"c7ffc8e18883ae90270b9d4c0dcb813698f920dfda3376430f83575ac81ce7b9"} Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.733872 4873 scope.go:117] "RemoveContainer" containerID="89333ead7926aa2618b4a528f8056c909774aca95003b189eba1f9e1ae277295" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.734880 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8656fdbcc7-6lw5c" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.768215 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.768397 4873 scope.go:117] "RemoveContainer" containerID="20fe864189fb33810eb3acc7dc0b89314091b0776fb2a2bfe18804bc13374185" Feb 19 10:04:47 crc kubenswrapper[4873]: I0219 10:04:47.783905 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8656fdbcc7-6lw5c"] Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.231427 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xzlnz" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.358389 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6k84\" (UniqueName: \"kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84\") pod \"54b81c17-9130-4def-8021-e73168601bf6\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.358531 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle\") pod \"54b81c17-9130-4def-8021-e73168601bf6\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.358695 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts\") pod \"54b81c17-9130-4def-8021-e73168601bf6\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.358759 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data\") pod \"54b81c17-9130-4def-8021-e73168601bf6\" (UID: \"54b81c17-9130-4def-8021-e73168601bf6\") " Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.368353 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts" (OuterVolumeSpecName: "scripts") pod "54b81c17-9130-4def-8021-e73168601bf6" (UID: "54b81c17-9130-4def-8021-e73168601bf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.368472 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84" (OuterVolumeSpecName: "kube-api-access-r6k84") pod "54b81c17-9130-4def-8021-e73168601bf6" (UID: "54b81c17-9130-4def-8021-e73168601bf6"). InnerVolumeSpecName "kube-api-access-r6k84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.391161 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54b81c17-9130-4def-8021-e73168601bf6" (UID: "54b81c17-9130-4def-8021-e73168601bf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.397270 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data" (OuterVolumeSpecName: "config-data") pod "54b81c17-9130-4def-8021-e73168601bf6" (UID: "54b81c17-9130-4def-8021-e73168601bf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.461252 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.461308 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6k84\" (UniqueName: \"kubernetes.io/projected/54b81c17-9130-4def-8021-e73168601bf6-kube-api-access-r6k84\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.461323 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.461334 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54b81c17-9130-4def-8021-e73168601bf6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.746002 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xzlnz" event={"ID":"54b81c17-9130-4def-8021-e73168601bf6","Type":"ContainerDied","Data":"68ee08e86d5531761dadfdc7d5bd0ceb0e4ff127a8a81f8599f657d66df48dcd"} Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.746029 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xzlnz" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.746045 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ee08e86d5531761dadfdc7d5bd0ceb0e4ff127a8a81f8599f657d66df48dcd" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.933048 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:04:48 crc kubenswrapper[4873]: I0219 10:04:48.933163 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.016366 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.016612 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-log" containerID="cri-o://bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde" gracePeriod=30 Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.016716 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-api" containerID="cri-o://b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49" gracePeriod=30 Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.031742 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.031979 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="caad2a71-260f-41e5-99f0-532d73995f41" containerName="nova-scheduler-scheduler" containerID="cri-o://c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" gracePeriod=30 Feb 19 10:04:49 crc kubenswrapper[4873]: 
I0219 10:04:49.054529 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.496052 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78542dc-01da-47dc-aec5-a380b7484425" path="/var/lib/kubelet/pods/e78542dc-01da-47dc-aec5-a380b7484425/volumes" Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.761375 4873 generic.go:334] "Generic (PLEG): container finished" podID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerID="bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde" exitCode=143 Feb 19 10:04:49 crc kubenswrapper[4873]: I0219 10:04:49.762277 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerDied","Data":"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde"} Feb 19 10:04:50 crc kubenswrapper[4873]: I0219 10:04:50.771488 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-metadata" containerID="cri-o://0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" gracePeriod=30 Feb 19 10:04:50 crc kubenswrapper[4873]: I0219 10:04:50.771474 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-log" containerID="cri-o://db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" gracePeriod=30 Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.105195 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 is running failed: container process not found" 
containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.105581 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 is running failed: container process not found" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.105941 4873 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 is running failed: container process not found" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.106009 4873 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="caad2a71-260f-41e5-99f0-532d73995f41" containerName="nova-scheduler-scheduler" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.446540 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.453962 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.625508 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnrvh\" (UniqueName: \"kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh\") pod \"caad2a71-260f-41e5-99f0-532d73995f41\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.625911 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data\") pod \"caad2a71-260f-41e5-99f0-532d73995f41\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626075 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle\") pod \"a77215ef-e683-4030-9b93-5f30814f1158\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626216 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knkwd\" (UniqueName: \"kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd\") pod \"a77215ef-e683-4030-9b93-5f30814f1158\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626365 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle\") pod \"caad2a71-260f-41e5-99f0-532d73995f41\" (UID: \"caad2a71-260f-41e5-99f0-532d73995f41\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626530 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs\") pod \"a77215ef-e683-4030-9b93-5f30814f1158\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626646 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs\") pod \"a77215ef-e683-4030-9b93-5f30814f1158\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.626775 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data\") pod \"a77215ef-e683-4030-9b93-5f30814f1158\" (UID: \"a77215ef-e683-4030-9b93-5f30814f1158\") " Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.627206 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs" (OuterVolumeSpecName: "logs") pod "a77215ef-e683-4030-9b93-5f30814f1158" (UID: "a77215ef-e683-4030-9b93-5f30814f1158"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.628186 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a77215ef-e683-4030-9b93-5f30814f1158-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.632345 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd" (OuterVolumeSpecName: "kube-api-access-knkwd") pod "a77215ef-e683-4030-9b93-5f30814f1158" (UID: "a77215ef-e683-4030-9b93-5f30814f1158"). InnerVolumeSpecName "kube-api-access-knkwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.632513 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh" (OuterVolumeSpecName: "kube-api-access-jnrvh") pod "caad2a71-260f-41e5-99f0-532d73995f41" (UID: "caad2a71-260f-41e5-99f0-532d73995f41"). InnerVolumeSpecName "kube-api-access-jnrvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.655316 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data" (OuterVolumeSpecName: "config-data") pod "caad2a71-260f-41e5-99f0-532d73995f41" (UID: "caad2a71-260f-41e5-99f0-532d73995f41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.658452 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a77215ef-e683-4030-9b93-5f30814f1158" (UID: "a77215ef-e683-4030-9b93-5f30814f1158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.665728 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caad2a71-260f-41e5-99f0-532d73995f41" (UID: "caad2a71-260f-41e5-99f0-532d73995f41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.669578 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data" (OuterVolumeSpecName: "config-data") pod "a77215ef-e683-4030-9b93-5f30814f1158" (UID: "a77215ef-e683-4030-9b93-5f30814f1158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.698298 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a77215ef-e683-4030-9b93-5f30814f1158" (UID: "a77215ef-e683-4030-9b93-5f30814f1158"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730391 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnrvh\" (UniqueName: \"kubernetes.io/projected/caad2a71-260f-41e5-99f0-532d73995f41-kube-api-access-jnrvh\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730427 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730441 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730454 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knkwd\" (UniqueName: 
\"kubernetes.io/projected/a77215ef-e683-4030-9b93-5f30814f1158-kube-api-access-knkwd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730466 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caad2a71-260f-41e5-99f0-532d73995f41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730477 4873 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.730489 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77215ef-e683-4030-9b93-5f30814f1158-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.782166 4873 generic.go:334] "Generic (PLEG): container finished" podID="caad2a71-260f-41e5-99f0-532d73995f41" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" exitCode=0 Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.782245 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"caad2a71-260f-41e5-99f0-532d73995f41","Type":"ContainerDied","Data":"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225"} Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.782279 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"caad2a71-260f-41e5-99f0-532d73995f41","Type":"ContainerDied","Data":"6286e92557ca766dfcf83501ac2e45ef3faaab8e9e1c247dd572c05c5d0518c2"} Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.782300 4873 scope.go:117] "RemoveContainer" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" Feb 19 10:04:51 crc 
kubenswrapper[4873]: I0219 10:04:51.782423 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.792970 4873 generic.go:334] "Generic (PLEG): container finished" podID="a77215ef-e683-4030-9b93-5f30814f1158" containerID="0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" exitCode=0 Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.793008 4873 generic.go:334] "Generic (PLEG): container finished" podID="a77215ef-e683-4030-9b93-5f30814f1158" containerID="db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" exitCode=143 Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.793035 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerDied","Data":"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796"} Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.793066 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerDied","Data":"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3"} Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.793080 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a77215ef-e683-4030-9b93-5f30814f1158","Type":"ContainerDied","Data":"ecd5ec7a0c35523bee927dcc16abb03decd8d6779d5d402a7592d31b466ab8a8"} Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.793193 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.820554 4873 scope.go:117] "RemoveContainer" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.821342 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225\": container with ID starting with c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 not found: ID does not exist" containerID="c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.821379 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225"} err="failed to get container status \"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225\": rpc error: code = NotFound desc = could not find container \"c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225\": container with ID starting with c398c7bd4d96ee9c9aa015eb71d58185943f0386fe2115d9a2a7fe20b51f8225 not found: ID does not exist" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.821408 4873 scope.go:117] "RemoveContainer" containerID="0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.845459 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.861838 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.862443 4873 scope.go:117] "RemoveContainer" containerID="db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" Feb 19 10:04:51 crc kubenswrapper[4873]: 
I0219 10:04:51.899267 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.911049 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.918069 4873 scope.go:117] "RemoveContainer" containerID="0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.918548 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796\": container with ID starting with 0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796 not found: ID does not exist" containerID="0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.918579 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796"} err="failed to get container status \"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796\": rpc error: code = NotFound desc = could not find container \"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796\": container with ID starting with 0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796 not found: ID does not exist" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.918601 4873 scope.go:117] "RemoveContainer" containerID="db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.918997 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3\": container with ID starting with 
db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3 not found: ID does not exist" containerID="db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.919020 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3"} err="failed to get container status \"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3\": rpc error: code = NotFound desc = could not find container \"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3\": container with ID starting with db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3 not found: ID does not exist" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.919032 4873 scope.go:117] "RemoveContainer" containerID="0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.919247 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796"} err="failed to get container status \"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796\": rpc error: code = NotFound desc = could not find container \"0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796\": container with ID starting with 0116364fa1af8724e348bd59034de2a4b4e700ed9792921c6c61a583dae2d796 not found: ID does not exist" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.919260 4873 scope.go:117] "RemoveContainer" containerID="db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.919783 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3"} err="failed to get container status 
\"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3\": rpc error: code = NotFound desc = could not find container \"db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3\": container with ID starting with db7748b8444da78c681046bde136f47b14e67f4541a54edd5e6cba0baa6114e3 not found: ID does not exist" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924319 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924750 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b81c17-9130-4def-8021-e73168601bf6" containerName="nova-manage" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924768 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b81c17-9130-4def-8021-e73168601bf6" containerName="nova-manage" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924785 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="dnsmasq-dns" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924792 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="dnsmasq-dns" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924805 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="init" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924812 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="init" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924838 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caad2a71-260f-41e5-99f0-532d73995f41" containerName="nova-scheduler-scheduler" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924845 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="caad2a71-260f-41e5-99f0-532d73995f41" containerName="nova-scheduler-scheduler" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924882 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-metadata" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924887 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-metadata" Feb 19 10:04:51 crc kubenswrapper[4873]: E0219 10:04:51.924900 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-log" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.924906 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-log" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.925081 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78542dc-01da-47dc-aec5-a380b7484425" containerName="dnsmasq-dns" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.925095 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="caad2a71-260f-41e5-99f0-532d73995f41" containerName="nova-scheduler-scheduler" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.925127 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-log" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.925137 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b81c17-9130-4def-8021-e73168601bf6" containerName="nova-manage" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.925185 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77215ef-e683-4030-9b93-5f30814f1158" containerName="nova-metadata-metadata" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.932589 4873 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.937855 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.977175 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.979394 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.981634 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.983143 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.985436 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:51 crc kubenswrapper[4873]: I0219 10:04:51.998410 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.054763 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4nvb\" (UniqueName: \"kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.057082 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " 
pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.057163 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.158884 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.158963 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4nvb\" (UniqueName: \"kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159008 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159053 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159159 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159189 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttk6m\" (UniqueName: \"kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159212 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.159371 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.168480 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.181293 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.181626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4nvb\" (UniqueName: \"kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb\") pod \"nova-scheduler-0\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.261398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.262024 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.262082 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.262307 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc 
kubenswrapper[4873]: I0219 10:04:52.262377 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttk6m\" (UniqueName: \"kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.262405 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.266667 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.266702 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.267043 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.290610 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttk6m\" (UniqueName: 
\"kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m\") pod \"nova-metadata-0\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.329922 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.416028 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.426942 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.466208 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data\") pod \"a513081d-764b-47d8-85d8-7019d7ea92ca\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.466300 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs\") pod \"a513081d-764b-47d8-85d8-7019d7ea92ca\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.466339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrln\" (UniqueName: \"kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln\") pod \"a513081d-764b-47d8-85d8-7019d7ea92ca\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.466605 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle\") pod \"a513081d-764b-47d8-85d8-7019d7ea92ca\" (UID: \"a513081d-764b-47d8-85d8-7019d7ea92ca\") " Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.467297 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs" (OuterVolumeSpecName: "logs") pod "a513081d-764b-47d8-85d8-7019d7ea92ca" (UID: "a513081d-764b-47d8-85d8-7019d7ea92ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.470867 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln" (OuterVolumeSpecName: "kube-api-access-gnrln") pod "a513081d-764b-47d8-85d8-7019d7ea92ca" (UID: "a513081d-764b-47d8-85d8-7019d7ea92ca"). InnerVolumeSpecName "kube-api-access-gnrln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.504896 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a513081d-764b-47d8-85d8-7019d7ea92ca" (UID: "a513081d-764b-47d8-85d8-7019d7ea92ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.530338 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data" (OuterVolumeSpecName: "config-data") pod "a513081d-764b-47d8-85d8-7019d7ea92ca" (UID: "a513081d-764b-47d8-85d8-7019d7ea92ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.569486 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.569519 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a513081d-764b-47d8-85d8-7019d7ea92ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.569532 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a513081d-764b-47d8-85d8-7019d7ea92ca-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.569543 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnrln\" (UniqueName: \"kubernetes.io/projected/a513081d-764b-47d8-85d8-7019d7ea92ca-kube-api-access-gnrln\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.812271 4873 generic.go:334] "Generic (PLEG): container finished" podID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerID="b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49" exitCode=0 Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.812556 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerDied","Data":"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49"} Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.812586 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a513081d-764b-47d8-85d8-7019d7ea92ca","Type":"ContainerDied","Data":"d76a4cd46cc684d689557d9372222e68498dc6bce68b7bb1dbfe2b38a2ff9d6a"} Feb 19 10:04:52 crc kubenswrapper[4873]: 
I0219 10:04:52.812607 4873 scope.go:117] "RemoveContainer" containerID="b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.812736 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.845359 4873 scope.go:117] "RemoveContainer" containerID="bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.856294 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.867573 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.877674 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: E0219 10:04:52.878225 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-api" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.878253 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-api" Feb 19 10:04:52 crc kubenswrapper[4873]: E0219 10:04:52.878280 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-log" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.878289 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-log" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.878560 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-log" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.878592 4873 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" containerName="nova-api-api" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.879885 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.888170 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.893382 4873 scope.go:117] "RemoveContainer" containerID="b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.894031 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:04:52 crc kubenswrapper[4873]: E0219 10:04:52.896214 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49\": container with ID starting with b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49 not found: ID does not exist" containerID="b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.896264 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49"} err="failed to get container status \"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49\": rpc error: code = NotFound desc = could not find container \"b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49\": container with ID starting with b1b21af5cb273a16647acb4511e7255326ce000c69525f549d69868584813f49 not found: ID does not exist" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.896305 4873 scope.go:117] "RemoveContainer" 
containerID="bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde" Feb 19 10:04:52 crc kubenswrapper[4873]: E0219 10:04:52.898339 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde\": container with ID starting with bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde not found: ID does not exist" containerID="bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.898380 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde"} err="failed to get container status \"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde\": rpc error: code = NotFound desc = could not find container \"bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde\": container with ID starting with bed6311acd16b8356a4487f32ee541e984f5932f9c965fe2c30e30a6b4581fde not found: ID does not exist" Feb 19 10:04:52 crc kubenswrapper[4873]: W0219 10:04:52.911462 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8db35141_6a4c_41cb_8a70_c68ab32fb2fe.slice/crio-f055548cf56af3386b5caa98b53e876cf48bc03e71f26f63cd2f36d9a6f05688 WatchSource:0}: Error finding container f055548cf56af3386b5caa98b53e876cf48bc03e71f26f63cd2f36d9a6f05688: Status 404 returned error can't find the container with id f055548cf56af3386b5caa98b53e876cf48bc03e71f26f63cd2f36d9a6f05688 Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.925272 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.976983 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.977051 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.977301 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwjp\" (UniqueName: \"kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.977522 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:52 crc kubenswrapper[4873]: I0219 10:04:52.987717 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:04:52 crc kubenswrapper[4873]: W0219 10:04:52.992309 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab0c5b09_1134_4319_890d_8d42e916fc4c.slice/crio-bccf19aed991c39d7abd6ccba3455f58690ec6cd1d9a81513e1e6040784b81a5 WatchSource:0}: Error finding container bccf19aed991c39d7abd6ccba3455f58690ec6cd1d9a81513e1e6040784b81a5: Status 404 returned error can't find the container with id 
bccf19aed991c39d7abd6ccba3455f58690ec6cd1d9a81513e1e6040784b81a5 Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.080134 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.080187 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.080302 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwjp\" (UniqueName: \"kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.080606 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.081342 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.085554 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.086049 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.102233 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwjp\" (UniqueName: \"kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp\") pod \"nova-api-0\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.266144 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.497861 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a513081d-764b-47d8-85d8-7019d7ea92ca" path="/var/lib/kubelet/pods/a513081d-764b-47d8-85d8-7019d7ea92ca/volumes" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.498917 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77215ef-e683-4030-9b93-5f30814f1158" path="/var/lib/kubelet/pods/a77215ef-e683-4030-9b93-5f30814f1158/volumes" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.499562 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caad2a71-260f-41e5-99f0-532d73995f41" path="/var/lib/kubelet/pods/caad2a71-260f-41e5-99f0-532d73995f41/volumes" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.729375 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:04:53 crc kubenswrapper[4873]: W0219 
10:04:53.739989 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2e87055_b0d9_4e47_9e2d_db14987e29c1.slice/crio-d550b23e94f3932a98e323b206b2513b13546a69a9e2a91f44f15350f1bda5ab WatchSource:0}: Error finding container d550b23e94f3932a98e323b206b2513b13546a69a9e2a91f44f15350f1bda5ab: Status 404 returned error can't find the container with id d550b23e94f3932a98e323b206b2513b13546a69a9e2a91f44f15350f1bda5ab Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.836165 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8db35141-6a4c-41cb-8a70-c68ab32fb2fe","Type":"ContainerStarted","Data":"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.836208 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8db35141-6a4c-41cb-8a70-c68ab32fb2fe","Type":"ContainerStarted","Data":"f055548cf56af3386b5caa98b53e876cf48bc03e71f26f63cd2f36d9a6f05688"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.839058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerStarted","Data":"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.839116 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerStarted","Data":"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.839126 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerStarted","Data":"bccf19aed991c39d7abd6ccba3455f58690ec6cd1d9a81513e1e6040784b81a5"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.841726 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerStarted","Data":"d550b23e94f3932a98e323b206b2513b13546a69a9e2a91f44f15350f1bda5ab"} Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.861894 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.861871464 podStartE2EDuration="2.861871464s" podCreationTimestamp="2026-02-19 10:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:53.852321936 +0000 UTC m=+1203.141753574" watchObservedRunningTime="2026-02-19 10:04:53.861871464 +0000 UTC m=+1203.151303102" Feb 19 10:04:53 crc kubenswrapper[4873]: I0219 10:04:53.874921 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.87490061 podStartE2EDuration="2.87490061s" podCreationTimestamp="2026-02-19 10:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:53.871198478 +0000 UTC m=+1203.160630116" watchObservedRunningTime="2026-02-19 10:04:53.87490061 +0000 UTC m=+1203.164332248" Feb 19 10:04:54 crc kubenswrapper[4873]: I0219 10:04:54.857322 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerStarted","Data":"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"} Feb 19 10:04:54 crc kubenswrapper[4873]: I0219 10:04:54.858454 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerStarted","Data":"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"} Feb 19 10:04:54 crc kubenswrapper[4873]: I0219 10:04:54.860524 4873 generic.go:334] "Generic (PLEG): container finished" podID="96fca831-509a-4abd-bb7e-2c0f4704368b" containerID="343f5f5d97db66db1963a29b53ef93078842c1069756343de5aa869eb8885cd9" exitCode=0 Feb 19 10:04:54 crc kubenswrapper[4873]: I0219 10:04:54.860638 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" event={"ID":"96fca831-509a-4abd-bb7e-2c0f4704368b","Type":"ContainerDied","Data":"343f5f5d97db66db1963a29b53ef93078842c1069756343de5aa869eb8885cd9"} Feb 19 10:04:54 crc kubenswrapper[4873]: I0219 10:04:54.908539 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.908511678 podStartE2EDuration="2.908511678s" podCreationTimestamp="2026-02-19 10:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:54.888168199 +0000 UTC m=+1204.177599857" watchObservedRunningTime="2026-02-19 10:04:54.908511678 +0000 UTC m=+1204.197943356" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.267047 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.271903 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data\") pod \"96fca831-509a-4abd-bb7e-2c0f4704368b\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.271997 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d48kf\" (UniqueName: \"kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf\") pod \"96fca831-509a-4abd-bb7e-2c0f4704368b\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.272113 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts\") pod \"96fca831-509a-4abd-bb7e-2c0f4704368b\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.272203 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle\") pod \"96fca831-509a-4abd-bb7e-2c0f4704368b\" (UID: \"96fca831-509a-4abd-bb7e-2c0f4704368b\") " Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.277934 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf" (OuterVolumeSpecName: "kube-api-access-d48kf") pod "96fca831-509a-4abd-bb7e-2c0f4704368b" (UID: "96fca831-509a-4abd-bb7e-2c0f4704368b"). InnerVolumeSpecName "kube-api-access-d48kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.280844 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts" (OuterVolumeSpecName: "scripts") pod "96fca831-509a-4abd-bb7e-2c0f4704368b" (UID: "96fca831-509a-4abd-bb7e-2c0f4704368b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.310257 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data" (OuterVolumeSpecName: "config-data") pod "96fca831-509a-4abd-bb7e-2c0f4704368b" (UID: "96fca831-509a-4abd-bb7e-2c0f4704368b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.312451 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96fca831-509a-4abd-bb7e-2c0f4704368b" (UID: "96fca831-509a-4abd-bb7e-2c0f4704368b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.373850 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.373893 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d48kf\" (UniqueName: \"kubernetes.io/projected/96fca831-509a-4abd-bb7e-2c0f4704368b-kube-api-access-d48kf\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.373907 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.373918 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fca831-509a-4abd-bb7e-2c0f4704368b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.884690 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" event={"ID":"96fca831-509a-4abd-bb7e-2c0f4704368b","Type":"ContainerDied","Data":"fa528c871922eeb11614e9dc0af7c2459d78bffa25b2a2a86e1fa5e00eb6941c"} Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.884733 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa528c871922eeb11614e9dc0af7c2459d78bffa25b2a2a86e1fa5e00eb6941c" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.884781 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9z5nq" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.977467 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:04:56 crc kubenswrapper[4873]: E0219 10:04:56.977859 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96fca831-509a-4abd-bb7e-2c0f4704368b" containerName="nova-cell1-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.977877 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fca831-509a-4abd-bb7e-2c0f4704368b" containerName="nova-cell1-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.978058 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="96fca831-509a-4abd-bb7e-2c0f4704368b" containerName="nova-cell1-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.978688 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.982262 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.985993 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.986125 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " 
pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.986184 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgf4k\" (UniqueName: \"kubernetes.io/projected/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-kube-api-access-fgf4k\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4873]: I0219 10:04:56.993047 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.088316 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgf4k\" (UniqueName: \"kubernetes.io/projected/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-kube-api-access-fgf4k\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.088758 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.088943 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.096156 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.096735 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.105376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgf4k\" (UniqueName: \"kubernetes.io/projected/0688136a-f0b5-4a2a-8f08-9c99d9c3644c-kube-api-access-fgf4k\") pod \"nova-cell1-conductor-0\" (UID: \"0688136a-f0b5-4a2a-8f08-9c99d9c3644c\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.300878 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.420487 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.427273 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.427872 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.812667 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:04:57 crc kubenswrapper[4873]: W0219 10:04:57.813222 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0688136a_f0b5_4a2a_8f08_9c99d9c3644c.slice/crio-5c47344e502844640fdf428f96539358855e0d910280920c96a09e38b74656ca WatchSource:0}: Error finding container 5c47344e502844640fdf428f96539358855e0d910280920c96a09e38b74656ca: Status 404 returned error can't find the container with id 5c47344e502844640fdf428f96539358855e0d910280920c96a09e38b74656ca Feb 19 10:04:57 crc kubenswrapper[4873]: I0219 10:04:57.897025 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0688136a-f0b5-4a2a-8f08-9c99d9c3644c","Type":"ContainerStarted","Data":"5c47344e502844640fdf428f96539358855e0d910280920c96a09e38b74656ca"} Feb 19 10:04:58 crc kubenswrapper[4873]: I0219 10:04:58.910047 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0688136a-f0b5-4a2a-8f08-9c99d9c3644c","Type":"ContainerStarted","Data":"32a28b03f93d802720d86299515a8bd1793dbc1a6df341c0590c74dffa8f5aab"} Feb 19 10:04:58 crc kubenswrapper[4873]: I0219 10:04:58.910406 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-conductor-0" Feb 19 10:04:58 crc kubenswrapper[4873]: I0219 10:04:58.941156 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.941136976 podStartE2EDuration="2.941136976s" podCreationTimestamp="2026-02-19 10:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:58.92452411 +0000 UTC m=+1208.213955768" watchObservedRunningTime="2026-02-19 10:04:58.941136976 +0000 UTC m=+1208.230568614" Feb 19 10:04:59 crc kubenswrapper[4873]: I0219 10:04:59.631546 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:05:02 crc kubenswrapper[4873]: I0219 10:05:02.416857 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:05:02 crc kubenswrapper[4873]: I0219 10:05:02.428483 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:02 crc kubenswrapper[4873]: I0219 10:05:02.428541 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:02 crc kubenswrapper[4873]: I0219 10:05:02.443848 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:05:02 crc kubenswrapper[4873]: I0219 10:05:02.974433 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.266737 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.266788 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:03 crc 
kubenswrapper[4873]: I0219 10:05:03.289162 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.289440 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5224ec80-b354-467f-b660-2d22b9725be0" containerName="kube-state-metrics" containerID="cri-o://fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1" gracePeriod=30 Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.442274 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.442470 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.866267 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.959716 4873 generic.go:334] "Generic (PLEG): container finished" podID="5224ec80-b354-467f-b660-2d22b9725be0" containerID="fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1" exitCode=2 Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.959793 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.959833 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5224ec80-b354-467f-b660-2d22b9725be0","Type":"ContainerDied","Data":"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1"} Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.959861 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5224ec80-b354-467f-b660-2d22b9725be0","Type":"ContainerDied","Data":"e0fab87f6d902a58d41b4b35cef6645c9197dee8f59fc04defe1aac4065e472b"} Feb 19 10:05:03 crc kubenswrapper[4873]: I0219 10:05:03.959877 4873 scope.go:117] "RemoveContainer" containerID="fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.000732 4873 scope.go:117] "RemoveContainer" containerID="fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1" Feb 19 10:05:04 crc kubenswrapper[4873]: E0219 10:05:04.002395 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1\": container with ID starting with fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1 not found: ID does not exist" containerID="fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.002450 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1"} err="failed to get container status \"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1\": rpc error: code = NotFound desc = could not find container \"fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1\": container with ID starting with 
fff297dee24ce88f13715d9cd5435080bd7f84d8bee1670209f4a402d95507f1 not found: ID does not exist" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.022162 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fb6h\" (UniqueName: \"kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h\") pod \"5224ec80-b354-467f-b660-2d22b9725be0\" (UID: \"5224ec80-b354-467f-b660-2d22b9725be0\") " Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.031472 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h" (OuterVolumeSpecName: "kube-api-access-7fb6h") pod "5224ec80-b354-467f-b660-2d22b9725be0" (UID: "5224ec80-b354-467f-b660-2d22b9725be0"). InnerVolumeSpecName "kube-api-access-7fb6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.125220 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fb6h\" (UniqueName: \"kubernetes.io/projected/5224ec80-b354-467f-b660-2d22b9725be0-kube-api-access-7fb6h\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.298131 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.310639 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.326187 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:04 crc kubenswrapper[4873]: E0219 10:05:04.327719 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5224ec80-b354-467f-b660-2d22b9725be0" containerName="kube-state-metrics" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.327849 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5224ec80-b354-467f-b660-2d22b9725be0" containerName="kube-state-metrics" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.328799 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="5224ec80-b354-467f-b660-2d22b9725be0" containerName="kube-state-metrics" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.330077 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.343000 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.358052 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.361497 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.362069 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.378825 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.432316 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqjf8\" (UniqueName: \"kubernetes.io/projected/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-api-access-lqjf8\") pod 
\"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.432367 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.432422 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.432462 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.534461 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqjf8\" (UniqueName: \"kubernetes.io/projected/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-api-access-lqjf8\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.534514 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-combined-ca-bundle\") pod 
\"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.534589 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.534641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.539774 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.550165 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.552073 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.553329 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqjf8\" (UniqueName: \"kubernetes.io/projected/84c63c73-45f3-4d27-a3a3-cbfecd9e1810-kube-api-access-lqjf8\") pod \"kube-state-metrics-0\" (UID: \"84c63c73-45f3-4d27-a3a3-cbfecd9e1810\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:04 crc kubenswrapper[4873]: I0219 10:05:04.659625 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.126836 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.521698 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5224ec80-b354-467f-b660-2d22b9725be0" path="/var/lib/kubelet/pods/5224ec80-b354-467f-b660-2d22b9725be0/volumes" Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.850851 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.851241 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="sg-core" containerID="cri-o://090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2" gracePeriod=30 Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.851266 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-notification-agent" containerID="cri-o://6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3" gracePeriod=30 Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.851240 4873 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="proxy-httpd" containerID="cri-o://08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a" gracePeriod=30 Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.851456 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-central-agent" containerID="cri-o://927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9" gracePeriod=30 Feb 19 10:05:05 crc kubenswrapper[4873]: I0219 10:05:05.980533 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84c63c73-45f3-4d27-a3a3-cbfecd9e1810","Type":"ContainerStarted","Data":"8a16ce75f567cbe15430ec901a55481ae1f2669fb715f58f63a09854ac4b36a8"} Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.993912 4873 generic.go:334] "Generic (PLEG): container finished" podID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerID="08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a" exitCode=0 Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.993953 4873 generic.go:334] "Generic (PLEG): container finished" podID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerID="090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2" exitCode=2 Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.993963 4873 generic.go:334] "Generic (PLEG): container finished" podID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerID="927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9" exitCode=0 Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.993977 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerDied","Data":"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"} Feb 19 10:05:06 crc 
kubenswrapper[4873]: I0219 10:05:06.994027 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerDied","Data":"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"}
Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.994041 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerDied","Data":"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"}
Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.995664 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84c63c73-45f3-4d27-a3a3-cbfecd9e1810","Type":"ContainerStarted","Data":"e9c6bfb09d4c220b8db81ffbf2c1166fe1e83c7922f9e171dae791088134bef5"}
Feb 19 10:05:06 crc kubenswrapper[4873]: I0219 10:05:06.996030 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 10:05:07 crc kubenswrapper[4873]: I0219 10:05:07.019138 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.464656175 podStartE2EDuration="3.01911733s" podCreationTimestamp="2026-02-19 10:05:04 +0000 UTC" firstStartedPulling="2026-02-19 10:05:05.124713554 +0000 UTC m=+1214.414145202" lastFinishedPulling="2026-02-19 10:05:06.679174719 +0000 UTC m=+1215.968606357" observedRunningTime="2026-02-19 10:05:07.018066084 +0000 UTC m=+1216.307497722" watchObservedRunningTime="2026-02-19 10:05:07.01911733 +0000 UTC m=+1216.308548978"
Feb 19 10:05:07 crc kubenswrapper[4873]: I0219 10:05:07.336147 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.790378 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934265 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934342 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934357 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934389 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8fsk\" (UniqueName: \"kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934451 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934489 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934577 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle\") pod \"aaaed141-d989-4b6f-ad3b-aefe0952c823\" (UID: \"aaaed141-d989-4b6f-ad3b-aefe0952c823\") "
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934683 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.934843 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.935483 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.935515 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaaed141-d989-4b6f-ad3b-aefe0952c823-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.947748 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts" (OuterVolumeSpecName: "scripts") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.947770 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk" (OuterVolumeSpecName: "kube-api-access-t8fsk") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "kube-api-access-t8fsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:05:08 crc kubenswrapper[4873]: I0219 10:05:08.961519 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.019018 4873 generic.go:334] "Generic (PLEG): container finished" podID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerID="6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3" exitCode=0
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.019058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerDied","Data":"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"}
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.019082 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaaed141-d989-4b6f-ad3b-aefe0952c823","Type":"ContainerDied","Data":"a10f8c50de08f100cd9a2d7823fcccf70bf26fcbe7aced261a97c5fa92b15f9b"}
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.019112 4873 scope.go:117] "RemoveContainer" containerID="08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.019233 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.023567 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.039648 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8fsk\" (UniqueName: \"kubernetes.io/projected/aaaed141-d989-4b6f-ad3b-aefe0952c823-kube-api-access-t8fsk\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.039677 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.039686 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.040778 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.042036 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data" (OuterVolumeSpecName: "config-data") pod "aaaed141-d989-4b6f-ad3b-aefe0952c823" (UID: "aaaed141-d989-4b6f-ad3b-aefe0952c823"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.090708 4873 scope.go:117] "RemoveContainer" containerID="090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.126865 4873 scope.go:117] "RemoveContainer" containerID="6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.143169 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaaed141-d989-4b6f-ad3b-aefe0952c823-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.154853 4873 scope.go:117] "RemoveContainer" containerID="927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.175383 4873 scope.go:117] "RemoveContainer" containerID="08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.175793 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a\": container with ID starting with 08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a not found: ID does not exist" containerID="08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.175834 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a"} err="failed to get container status \"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a\": rpc error: code = NotFound desc = could not find container \"08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a\": container with ID starting with 08fd273370ab3603598065600f3bad166543e52db0dec0c4a5d168de0ad4347a not found: ID does not exist"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.175856 4873 scope.go:117] "RemoveContainer" containerID="090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.176084 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2\": container with ID starting with 090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2 not found: ID does not exist" containerID="090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.176135 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2"} err="failed to get container status \"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2\": rpc error: code = NotFound desc = could not find container \"090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2\": container with ID starting with 090a2539fe4dfb43105b70b0c0088e33de18f21996d9aa508978b202715db0e2 not found: ID does not exist"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.176155 4873 scope.go:117] "RemoveContainer" containerID="6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.176431 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3\": container with ID starting with 6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3 not found: ID does not exist" containerID="6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.176459 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3"} err="failed to get container status \"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3\": rpc error: code = NotFound desc = could not find container \"6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3\": container with ID starting with 6a6cd3259862f8346cd815b704d4c845d978984e7e26d6d40916439da8e278e3 not found: ID does not exist"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.176476 4873 scope.go:117] "RemoveContainer" containerID="927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.176759 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9\": container with ID starting with 927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9 not found: ID does not exist" containerID="927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.176785 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9"} err="failed to get container status \"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9\": rpc error: code = NotFound desc = could not find container \"927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9\": container with ID starting with 927d24780f007579d39d1cb2db522f7319926e96deda19a1ce5ec381489e91e9 not found: ID does not exist"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.359726 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.372367 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425114 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.425497 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-central-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425515 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-central-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.425540 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="sg-core"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425547 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="sg-core"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.425567 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-notification-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425573 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-notification-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: E0219 10:05:09.425589 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="proxy-httpd"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425595 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="proxy-httpd"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425818 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="sg-core"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425835 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="proxy-httpd"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425846 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-central-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.425857 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" containerName="ceilometer-notification-agent"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.427615 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.431762 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.431966 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.432076 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.442371 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.494034 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaaed141-d989-4b6f-ad3b-aefe0952c823" path="/var/lib/kubelet/pods/aaaed141-d989-4b6f-ad3b-aefe0952c823/volumes"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559143 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559219 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559258 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bj7n\" (UniqueName: \"kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559427 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559455 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559474 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.559499 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661458 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661522 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661559 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661585 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661611 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661691 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661729 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.661751 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bj7n\" (UniqueName: \"kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.662536 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.662709 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.665279 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.665279 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.666127 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.666854 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.676698 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.679290 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bj7n\" (UniqueName: \"kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n\") pod \"ceilometer-0\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") " pod="openstack/ceilometer-0"
Feb 19 10:05:09 crc kubenswrapper[4873]: I0219 10:05:09.787527 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:10 crc kubenswrapper[4873]: I0219 10:05:10.247544 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:11 crc kubenswrapper[4873]: I0219 10:05:11.041042 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerStarted","Data":"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"}
Feb 19 10:05:11 crc kubenswrapper[4873]: I0219 10:05:11.041383 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerStarted","Data":"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"}
Feb 19 10:05:11 crc kubenswrapper[4873]: I0219 10:05:11.041395 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerStarted","Data":"807e7166281c5f8f2d5afe5dddfd4f72b55225d08b8ebed491be486ce864d054"}
Feb 19 10:05:12 crc kubenswrapper[4873]: I0219 10:05:12.053362 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerStarted","Data":"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"}
Feb 19 10:05:12 crc kubenswrapper[4873]: I0219 10:05:12.433740 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 10:05:12 crc kubenswrapper[4873]: I0219 10:05:12.435452 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 10:05:12 crc kubenswrapper[4873]: I0219 10:05:12.443505 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 10:05:12 crc kubenswrapper[4873]: I0219 10:05:12.933866 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.032697 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle\") pod \"592b92b0-44a1-4386-8f2e-8a55633dedd8\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") "
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.033084 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmjb\" (UniqueName: \"kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb\") pod \"592b92b0-44a1-4386-8f2e-8a55633dedd8\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") "
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.033225 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data\") pod \"592b92b0-44a1-4386-8f2e-8a55633dedd8\" (UID: \"592b92b0-44a1-4386-8f2e-8a55633dedd8\") "
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.038416 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb" (OuterVolumeSpecName: "kube-api-access-fcmjb") pod "592b92b0-44a1-4386-8f2e-8a55633dedd8" (UID: "592b92b0-44a1-4386-8f2e-8a55633dedd8"). InnerVolumeSpecName "kube-api-access-fcmjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.067304 4873 generic.go:334] "Generic (PLEG): container finished" podID="592b92b0-44a1-4386-8f2e-8a55633dedd8" containerID="d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3" exitCode=137
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.067493 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"592b92b0-44a1-4386-8f2e-8a55633dedd8","Type":"ContainerDied","Data":"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"}
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.067566 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"592b92b0-44a1-4386-8f2e-8a55633dedd8","Type":"ContainerDied","Data":"dd5cbf40dbe04c7e636d3d6dd6dd491656051f5351076f90a0afd9cb6afdd2b3"}
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.067592 4873 scope.go:117] "RemoveContainer" containerID="d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.069152 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.070750 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data" (OuterVolumeSpecName: "config-data") pod "592b92b0-44a1-4386-8f2e-8a55633dedd8" (UID: "592b92b0-44a1-4386-8f2e-8a55633dedd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.083571 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.093604 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "592b92b0-44a1-4386-8f2e-8a55633dedd8" (UID: "592b92b0-44a1-4386-8f2e-8a55633dedd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.137639 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.137674 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmjb\" (UniqueName: \"kubernetes.io/projected/592b92b0-44a1-4386-8f2e-8a55633dedd8-kube-api-access-fcmjb\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.137686 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592b92b0-44a1-4386-8f2e-8a55633dedd8-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.174028 4873 scope.go:117] "RemoveContainer" containerID="d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"
Feb 19 10:05:13 crc kubenswrapper[4873]: E0219 10:05:13.174923 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3\": container with ID starting with d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3 not found: ID does not exist" containerID="d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.174969 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3"} err="failed to get container status \"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3\": rpc error: code = NotFound desc = could not find container \"d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3\": container with ID starting with d989940ed8c94822b97ee63d283f9c02be1cea0db1e5579461b45d4f18b376b3 not found: ID does not exist"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.273812 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.275069 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.278562 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.281032 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.416996 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.439449 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.452117 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:13 crc kubenswrapper[4873]: E0219 10:05:13.452556 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592b92b0-44a1-4386-8f2e-8a55633dedd8" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.452572 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="592b92b0-44a1-4386-8f2e-8a55633dedd8" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.452747 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="592b92b0-44a1-4386-8f2e-8a55633dedd8" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.453393 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.456495 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.456699 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.456755 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.460487 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.496064 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592b92b0-44a1-4386-8f2e-8a55633dedd8" path="/var/lib/kubelet/pods/592b92b0-44a1-4386-8f2e-8a55633dedd8/volumes"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.545135 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.545300 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8mw\" (UniqueName: \"kubernetes.io/projected/cf46452a-f49d-48ab-a235-9e96f89c931f-kube-api-access-nv8mw\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.545384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.545662 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.545749 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.648250 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.648312 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.648364 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.648440 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8mw\" (UniqueName: \"kubernetes.io/projected/cf46452a-f49d-48ab-a235-9e96f89c931f-kube-api-access-nv8mw\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.648518 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.652991 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.653647 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.653708 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.655671 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf46452a-f49d-48ab-a235-9e96f89c931f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.666180 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv8mw\" (UniqueName: \"kubernetes.io/projected/cf46452a-f49d-48ab-a235-9e96f89c931f-kube-api-access-nv8mw\") pod \"nova-cell1-novncproxy-0\" (UID: \"cf46452a-f49d-48ab-a235-9e96f89c931f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:13 crc kubenswrapper[4873]: I0219 10:05:13.790447 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.085337 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerStarted","Data":"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"} Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.086172 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.087629 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.095925 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.143346 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.093480112 podStartE2EDuration="5.143325072s" podCreationTimestamp="2026-02-19 10:05:09 +0000 UTC" firstStartedPulling="2026-02-19 10:05:10.2552757 +0000 UTC m=+1219.544707338" lastFinishedPulling="2026-02-19 10:05:13.30512066 +0000 UTC m=+1222.594552298" observedRunningTime="2026-02-19 10:05:14.115015714 +0000 UTC m=+1223.404447352" watchObservedRunningTime="2026-02-19 10:05:14.143325072 +0000 UTC m=+1223.432756710" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.294240 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.296790 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.333871 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:05:14 crc kubenswrapper[4873]: W0219 10:05:14.335702 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf46452a_f49d_48ab_a235_9e96f89c931f.slice/crio-9418b54ccc328fe85e923e683a72e777da925d9c459b57bd508915bb11e8a16c WatchSource:0}: Error finding container 9418b54ccc328fe85e923e683a72e777da925d9c459b57bd508915bb11e8a16c: Status 404 returned error can't find the container with id 9418b54ccc328fe85e923e683a72e777da925d9c459b57bd508915bb11e8a16c Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.364210 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469589 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469695 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469768 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config\") pod 
\"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469833 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9vs\" (UniqueName: \"kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469866 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.469892 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.571752 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.571852 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9vs\" (UniqueName: 
\"kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.571881 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.571914 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.572001 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.572155 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.573129 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.573237 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.573897 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.574270 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.574406 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc\") pod \"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.594007 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9vs\" (UniqueName: \"kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs\") pod 
\"dnsmasq-dns-78d65dbfc-jjvbb\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.623011 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:14 crc kubenswrapper[4873]: I0219 10:05:14.682546 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:05:15 crc kubenswrapper[4873]: I0219 10:05:15.095320 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cf46452a-f49d-48ab-a235-9e96f89c931f","Type":"ContainerStarted","Data":"38a104eb659238745f063c24e19ef9f6376fe94feac7415bb8767d07f2a7a77b"} Feb 19 10:05:15 crc kubenswrapper[4873]: I0219 10:05:15.095826 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cf46452a-f49d-48ab-a235-9e96f89c931f","Type":"ContainerStarted","Data":"9418b54ccc328fe85e923e683a72e777da925d9c459b57bd508915bb11e8a16c"} Feb 19 10:05:15 crc kubenswrapper[4873]: I0219 10:05:15.116368 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.1163519060000002 podStartE2EDuration="2.116351906s" podCreationTimestamp="2026-02-19 10:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:15.113937906 +0000 UTC m=+1224.403369544" watchObservedRunningTime="2026-02-19 10:05:15.116351906 +0000 UTC m=+1224.405783534" Feb 19 10:05:15 crc kubenswrapper[4873]: I0219 10:05:15.144148 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:05:15 crc kubenswrapper[4873]: W0219 10:05:15.148151 4873 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fe74544_e8af_45bd_9193_2b247c5e002b.slice/crio-bc3e27b7897877cb015837d9764a384c7709da0f91ae1099d9eccef91911ed99 WatchSource:0}: Error finding container bc3e27b7897877cb015837d9764a384c7709da0f91ae1099d9eccef91911ed99: Status 404 returned error can't find the container with id bc3e27b7897877cb015837d9764a384c7709da0f91ae1099d9eccef91911ed99 Feb 19 10:05:16 crc kubenswrapper[4873]: I0219 10:05:16.104493 4873 generic.go:334] "Generic (PLEG): container finished" podID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerID="3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d" exitCode=0 Feb 19 10:05:16 crc kubenswrapper[4873]: I0219 10:05:16.106384 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" event={"ID":"7fe74544-e8af-45bd-9193-2b247c5e002b","Type":"ContainerDied","Data":"3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d"} Feb 19 10:05:16 crc kubenswrapper[4873]: I0219 10:05:16.106417 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" event={"ID":"7fe74544-e8af-45bd-9193-2b247c5e002b","Type":"ContainerStarted","Data":"bc3e27b7897877cb015837d9764a384c7709da0f91ae1099d9eccef91911ed99"} Feb 19 10:05:16 crc kubenswrapper[4873]: I0219 10:05:16.529697 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:17 crc kubenswrapper[4873]: I0219 10:05:17.116589 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" event={"ID":"7fe74544-e8af-45bd-9193-2b247c5e002b","Type":"ContainerStarted","Data":"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e"} Feb 19 10:05:17 crc kubenswrapper[4873]: I0219 10:05:17.116736 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-log" 
containerID="cri-o://f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2" gracePeriod=30 Feb 19 10:05:17 crc kubenswrapper[4873]: I0219 10:05:17.116876 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-api" containerID="cri-o://2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e" gracePeriod=30 Feb 19 10:05:17 crc kubenswrapper[4873]: I0219 10:05:17.196690 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" podStartSLOduration=3.19667071 podStartE2EDuration="3.19667071s" podCreationTimestamp="2026-02-19 10:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:17.174514776 +0000 UTC m=+1226.463946414" watchObservedRunningTime="2026-02-19 10:05:17.19667071 +0000 UTC m=+1226.486102348" Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.133488 4873 generic.go:334] "Generic (PLEG): container finished" podID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerID="f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2" exitCode=143 Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.134391 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerDied","Data":"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"} Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.134419 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.564507 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.564980 4873 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-central-agent" containerID="cri-o://8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b" gracePeriod=30 Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.565028 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="sg-core" containerID="cri-o://d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4" gracePeriod=30 Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.565081 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-notification-agent" containerID="cri-o://e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2" gracePeriod=30 Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.565091 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="proxy-httpd" containerID="cri-o://281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a" gracePeriod=30 Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.792718 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.853428 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.987680 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs\") pod \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.987731 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbwjp\" (UniqueName: \"kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp\") pod \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.987835 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data\") pod \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.987922 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle\") pod \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\" (UID: \"d2e87055-b0d9-4e47-9e2d-db14987e29c1\") " Feb 19 10:05:18 crc kubenswrapper[4873]: I0219 10:05:18.994992 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs" (OuterVolumeSpecName: "logs") pod "d2e87055-b0d9-4e47-9e2d-db14987e29c1" (UID: "d2e87055-b0d9-4e47-9e2d-db14987e29c1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:18.999340 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp" (OuterVolumeSpecName: "kube-api-access-jbwjp") pod "d2e87055-b0d9-4e47-9e2d-db14987e29c1" (UID: "d2e87055-b0d9-4e47-9e2d-db14987e29c1"). InnerVolumeSpecName "kube-api-access-jbwjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.035526 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2e87055-b0d9-4e47-9e2d-db14987e29c1" (UID: "d2e87055-b0d9-4e47-9e2d-db14987e29c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.045655 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data" (OuterVolumeSpecName: "config-data") pod "d2e87055-b0d9-4e47-9e2d-db14987e29c1" (UID: "d2e87055-b0d9-4e47-9e2d-db14987e29c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.090023 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.090061 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e87055-b0d9-4e47-9e2d-db14987e29c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.090071 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2e87055-b0d9-4e47-9e2d-db14987e29c1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.090079 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbwjp\" (UniqueName: \"kubernetes.io/projected/d2e87055-b0d9-4e47-9e2d-db14987e29c1-kube-api-access-jbwjp\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.145816 4873 generic.go:334] "Generic (PLEG): container finished" podID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerID="281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a" exitCode=0 Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.145865 4873 generic.go:334] "Generic (PLEG): container finished" podID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerID="d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4" exitCode=2 Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.145890 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerDied","Data":"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"} Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.145938 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerDied","Data":"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"}
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.148468 4873 generic.go:334] "Generic (PLEG): container finished" podID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerID="2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e" exitCode=0
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.148551 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.148559 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerDied","Data":"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"}
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.148594 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2e87055-b0d9-4e47-9e2d-db14987e29c1","Type":"ContainerDied","Data":"d550b23e94f3932a98e323b206b2513b13546a69a9e2a91f44f15350f1bda5ab"}
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.148614 4873 scope.go:117] "RemoveContainer" containerID="2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.174130 4873 scope.go:117] "RemoveContainer" containerID="f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.193945 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.199068 4873 scope.go:117] "RemoveContainer" containerID="2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"
Feb 19 10:05:19 crc kubenswrapper[4873]: E0219 10:05:19.199558 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e\": container with ID starting with 2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e not found: ID does not exist" containerID="2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.199598 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e"} err="failed to get container status \"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e\": rpc error: code = NotFound desc = could not find container \"2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e\": container with ID starting with 2ca3a3ad7f9ca37295ebb7a316ce7b89e3162d36b731b68e154c3c09cba2fb3e not found: ID does not exist"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.199624 4873 scope.go:117] "RemoveContainer" containerID="f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"
Feb 19 10:05:19 crc kubenswrapper[4873]: E0219 10:05:19.199917 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2\": container with ID starting with f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2 not found: ID does not exist" containerID="f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.199963 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2"} err="failed to get container status \"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2\": rpc error: code = NotFound desc = could not find container \"f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2\": container with ID starting with f6c6a093d08b34db5340df77d6fafcc6cae16deeb5b24238c9aa4d6a0671d3c2 not found: ID does not exist"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.203549 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.225420 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:19 crc kubenswrapper[4873]: E0219 10:05:19.225867 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-api"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.225888 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-api"
Feb 19 10:05:19 crc kubenswrapper[4873]: E0219 10:05:19.225926 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-log"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.225935 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-log"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.226241 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-log"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.226264 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" containerName="nova-api-api"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.227410 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.230159 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.230380 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.230653 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.234311 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.395515 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.395605 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.395647 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.395775 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.395891 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.396050 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjk8d\" (UniqueName: \"kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.495974 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e87055-b0d9-4e47-9e2d-db14987e29c1" path="/var/lib/kubelet/pods/d2e87055-b0d9-4e47-9e2d-db14987e29c1/volumes"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497630 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497684 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjk8d\" (UniqueName: \"kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497742 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497789 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497819 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.497891 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.498071 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.504299 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.504296 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.504443 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.504525 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.515363 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjk8d\" (UniqueName: \"kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d\") pod \"nova-api-0\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " pod="openstack/nova-api-0"
Feb 19 10:05:19 crc kubenswrapper[4873]: I0219 10:05:19.545062 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.022594 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:20 crc kubenswrapper[4873]: W0219 10:05:20.025236 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod772434a9_08d5_499e_9ea1_e9ed0cc1e1b6.slice/crio-11443c61c72c773d1b9e2f4dbdc5cf11081029e2fa850f7683c10c10c8471883 WatchSource:0}: Error finding container 11443c61c72c773d1b9e2f4dbdc5cf11081029e2fa850f7683c10c10c8471883: Status 404 returned error can't find the container with id 11443c61c72c773d1b9e2f4dbdc5cf11081029e2fa850f7683c10c10c8471883
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.159842 4873 generic.go:334] "Generic (PLEG): container finished" podID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerID="e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2" exitCode=0
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.159912 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerDied","Data":"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"}
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.164620 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerStarted","Data":"11443c61c72c773d1b9e2f4dbdc5cf11081029e2fa850f7683c10c10c8471883"}
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.674905 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851514 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851559 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851670 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851765 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851799 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851834 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851876 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bj7n\" (UniqueName: \"kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.851895 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd\") pod \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\" (UID: \"9c767ece-c345-4a24-93b3-3e7e3f662e0f\") "
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.852599 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.852705 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.856692 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n" (OuterVolumeSpecName: "kube-api-access-6bj7n") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "kube-api-access-6bj7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.869837 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts" (OuterVolumeSpecName: "scripts") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.904931 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.920398 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.946376 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954448 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bj7n\" (UniqueName: \"kubernetes.io/projected/9c767ece-c345-4a24-93b3-3e7e3f662e0f-kube-api-access-6bj7n\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954476 4873 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954487 4873 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954496 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954505 4873 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c767ece-c345-4a24-93b3-3e7e3f662e0f-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954514 4873 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.954521 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:20 crc kubenswrapper[4873]: I0219 10:05:20.988607 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data" (OuterVolumeSpecName: "config-data") pod "9c767ece-c345-4a24-93b3-3e7e3f662e0f" (UID: "9c767ece-c345-4a24-93b3-3e7e3f662e0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.056631 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c767ece-c345-4a24-93b3-3e7e3f662e0f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.179325 4873 generic.go:334] "Generic (PLEG): container finished" podID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerID="8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b" exitCode=0
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.179440 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerDied","Data":"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"}
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.179507 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c767ece-c345-4a24-93b3-3e7e3f662e0f","Type":"ContainerDied","Data":"807e7166281c5f8f2d5afe5dddfd4f72b55225d08b8ebed491be486ce864d054"}
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.179477 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.179532 4873 scope.go:117] "RemoveContainer" containerID="281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.184597 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerStarted","Data":"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb"}
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.184633 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerStarted","Data":"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74"}
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.207574 4873 scope.go:117] "RemoveContainer" containerID="d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.236665 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.236648732 podStartE2EDuration="2.236648732s" podCreationTimestamp="2026-02-19 10:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:21.214351984 +0000 UTC m=+1230.503783622" watchObservedRunningTime="2026-02-19 10:05:21.236648732 +0000 UTC m=+1230.526080370"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.240520 4873 scope.go:117] "RemoveContainer" containerID="e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.249194 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.262517 4873 scope.go:117] "RemoveContainer" containerID="8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.264987 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.277562 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.278027 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-central-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278047 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-central-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.278058 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-notification-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278064 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-notification-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.278076 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="proxy-httpd"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278083 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="proxy-httpd"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.278118 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="sg-core"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278125 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="sg-core"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278329 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="sg-core"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278354 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="proxy-httpd"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278371 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-notification-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.278384 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" containerName="ceilometer-central-agent"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.280438 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.283511 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.284017 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.284263 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.285588 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.298584 4873 scope.go:117] "RemoveContainer" containerID="281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.300828 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a\": container with ID starting with 281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a not found: ID does not exist" containerID="281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.300954 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a"} err="failed to get container status \"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a\": rpc error: code = NotFound desc = could not find container \"281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a\": container with ID starting with 281639fdf7e8eb6f014e8f87ef246ca8732e03adc6b6d58aee5acdcf818ca43a not found: ID does not exist"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.301082 4873 scope.go:117] "RemoveContainer" containerID="d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.301571 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4\": container with ID starting with d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4 not found: ID does not exist" containerID="d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.301692 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4"} err="failed to get container status \"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4\": rpc error: code = NotFound desc = could not find container \"d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4\": container with ID starting with d7bfa040f5b56903732b37fc818d6022980d779f6d081559594788050a432fc4 not found: ID does not exist"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.301801 4873 scope.go:117] "RemoveContainer" containerID="e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.302958 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2\": container with ID starting with e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2 not found: ID does not exist" containerID="e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.302993 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2"} err="failed to get container status \"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2\": rpc error: code = NotFound desc = could not find container \"e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2\": container with ID starting with e233046154b94937075780e18db17e39bb296d5ab4b60baa69e90a853f3a5ed2 not found: ID does not exist"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.303014 4873 scope.go:117] "RemoveContainer" containerID="8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.303310 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b\": container with ID starting with 8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b not found: ID does not exist" containerID="8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.303332 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b"} err="failed to get container status \"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b\": rpc error: code = NotFound desc = could not find container \"8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b\": container with ID starting with 8a41ac2def08f14d8da4168d8f19aa2fabc8acb436044297b419855d749d3f0b not found: ID does not exist"
Feb 19 10:05:21 crc kubenswrapper[4873]: E0219 10:05:21.338969 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c767ece_c345_4a24_93b3_3e7e3f662e0f.slice/crio-807e7166281c5f8f2d5afe5dddfd4f72b55225d08b8ebed491be486ce864d054\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c767ece_c345_4a24_93b3_3e7e3f662e0f.slice\": RecentStats: unable to find data in memory cache]"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464308 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464369 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-config-data\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464499 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-scripts\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464591 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnpw\" (UniqueName: \"kubernetes.io/projected/e432fa6f-daf1-4f3a-9f84-ac9495956013-kube-api-access-zsnpw\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464658 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-run-httpd\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464701 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.464814 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-log-httpd\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.495618 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c767ece-c345-4a24-93b3-3e7e3f662e0f" path="/var/lib/kubelet/pods/9c767ece-c345-4a24-93b3-3e7e3f662e0f/volumes"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.566713 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567012 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-config-data\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567154 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567296 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-scripts\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0"
Feb 19 10:05:21 crc kubenswrapper[4873]: I0219
10:05:21.567473 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsnpw\" (UniqueName: \"kubernetes.io/projected/e432fa6f-daf1-4f3a-9f84-ac9495956013-kube-api-access-zsnpw\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567615 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-run-httpd\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567737 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.567919 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-log-httpd\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.568062 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-run-httpd\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.568294 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e432fa6f-daf1-4f3a-9f84-ac9495956013-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.572650 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-scripts\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.573092 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.573334 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.576564 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-config-data\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.585657 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e432fa6f-daf1-4f3a-9f84-ac9495956013-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.590418 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsnpw\" (UniqueName: 
\"kubernetes.io/projected/e432fa6f-daf1-4f3a-9f84-ac9495956013-kube-api-access-zsnpw\") pod \"ceilometer-0\" (UID: \"e432fa6f-daf1-4f3a-9f84-ac9495956013\") " pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4873]: I0219 10:05:21.611675 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:22 crc kubenswrapper[4873]: I0219 10:05:22.055250 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:22 crc kubenswrapper[4873]: I0219 10:05:22.195371 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e432fa6f-daf1-4f3a-9f84-ac9495956013","Type":"ContainerStarted","Data":"32f14b35e516cbeb39793d746a4e9965aee26efb297a23939ab0efa460eb0612"} Feb 19 10:05:23 crc kubenswrapper[4873]: I0219 10:05:23.217176 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e432fa6f-daf1-4f3a-9f84-ac9495956013","Type":"ContainerStarted","Data":"2c9d998caf8ae31c8232e87ab3fa1eec56a6bad66d729371c7c83b4330762ea7"} Feb 19 10:05:23 crc kubenswrapper[4873]: I0219 10:05:23.217568 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e432fa6f-daf1-4f3a-9f84-ac9495956013","Type":"ContainerStarted","Data":"8e11799af3f86e6caa6a0799f6934ce14cb9f68c5589900c4a2b781684607904"} Feb 19 10:05:23 crc kubenswrapper[4873]: I0219 10:05:23.791169 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:23 crc kubenswrapper[4873]: I0219 10:05:23.812849 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.228714 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e432fa6f-daf1-4f3a-9f84-ac9495956013","Type":"ContainerStarted","Data":"847a37be053bbe33f84e36493db72679bb67bf08105ea8c34f1e3410b1da9a52"} Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.248218 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.408597 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ljn4d"] Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.410020 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.412468 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.413291 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.421452 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ljn4d"] Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.525742 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.526225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc 
kubenswrapper[4873]: I0219 10:05:24.526391 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.526673 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fnt9\" (UniqueName: \"kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.624347 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.628203 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.628283 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.628318 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.628394 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fnt9\" (UniqueName: \"kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.634574 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.634838 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.635917 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.662306 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fnt9\" (UniqueName: 
\"kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9\") pod \"nova-cell1-cell-mapping-ljn4d\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.700863 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"] Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.701202 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59574c798f-md9g4" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="dnsmasq-dns" containerID="cri-o://5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548" gracePeriod=10 Feb 19 10:05:24 crc kubenswrapper[4873]: I0219 10:05:24.737716 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.235328 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.246369 4873 generic.go:334] "Generic (PLEG): container finished" podID="561650f5-0705-4bab-903d-66bba11301ce" containerID="5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548" exitCode=0 Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.246485 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59574c798f-md9g4" event={"ID":"561650f5-0705-4bab-903d-66bba11301ce","Type":"ContainerDied","Data":"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548"} Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.246536 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59574c798f-md9g4" event={"ID":"561650f5-0705-4bab-903d-66bba11301ce","Type":"ContainerDied","Data":"36949a05a228a205fbf13f7609b5591909a30f0981a5fef6bf17ca7a531f1283"} Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.246563 4873 scope.go:117] "RemoveContainer" containerID="5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.246623 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59574c798f-md9g4" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.251538 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e432fa6f-daf1-4f3a-9f84-ac9495956013","Type":"ContainerStarted","Data":"85324d9429eb66e2f90861b513b4788e18de0a88a406595d3735ff1130f70689"} Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.279793 4873 scope.go:117] "RemoveContainer" containerID="0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.319980 4873 scope.go:117] "RemoveContainer" containerID="5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548" Feb 19 10:05:25 crc kubenswrapper[4873]: E0219 10:05:25.320676 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548\": container with ID starting with 5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548 not found: ID does not exist" containerID="5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.320722 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548"} err="failed to get container status \"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548\": rpc error: code = NotFound desc = could not find container \"5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548\": container with ID starting with 5fbace2c800a52592460c743049ca978bd11128cb3460f233a4ff016b5877548 not found: ID does not exist" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.320745 4873 scope.go:117] "RemoveContainer" containerID="0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f" Feb 19 10:05:25 crc 
kubenswrapper[4873]: E0219 10:05:25.321194 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f\": container with ID starting with 0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f not found: ID does not exist" containerID="0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.321210 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f"} err="failed to get container status \"0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f\": rpc error: code = NotFound desc = could not find container \"0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f\": container with ID starting with 0f9b9b7ee4ce408ec602f5938ab243185a9595459642fa3aa668d6fc66f1980f not found: ID does not exist" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356686 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356773 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356910 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmmp6\" (UniqueName: 
\"kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356935 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356959 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.356986 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc\") pod \"561650f5-0705-4bab-903d-66bba11301ce\" (UID: \"561650f5-0705-4bab-903d-66bba11301ce\") " Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.365426 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6" (OuterVolumeSpecName: "kube-api-access-wmmp6") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "kube-api-access-wmmp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.412066 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.417731 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.421526 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.423750 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config" (OuterVolumeSpecName: "config") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.425091 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ljn4d"] Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.444507 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "561650f5-0705-4bab-903d-66bba11301ce" (UID: "561650f5-0705-4bab-903d-66bba11301ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459396 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459428 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459440 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmmp6\" (UniqueName: \"kubernetes.io/projected/561650f5-0705-4bab-903d-66bba11301ce-kube-api-access-wmmp6\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459455 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459464 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 
10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.459471 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/561650f5-0705-4bab-903d-66bba11301ce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.581819 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"] Feb 19 10:05:25 crc kubenswrapper[4873]: I0219 10:05:25.592721 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59574c798f-md9g4"] Feb 19 10:05:26 crc kubenswrapper[4873]: I0219 10:05:26.275034 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ljn4d" event={"ID":"355c3bd2-5fb4-4a28-be15-e766b61eeed9","Type":"ContainerStarted","Data":"087819f431db46b81166e897b21b99194aa2c81f307651c133879685fcb5a03d"} Feb 19 10:05:26 crc kubenswrapper[4873]: I0219 10:05:26.275097 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ljn4d" event={"ID":"355c3bd2-5fb4-4a28-be15-e766b61eeed9","Type":"ContainerStarted","Data":"f92a5b55c4afad8b7be3982a8f532ea10a49853d42efb23114e4c171eee5c339"} Feb 19 10:05:26 crc kubenswrapper[4873]: I0219 10:05:26.277558 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:05:26 crc kubenswrapper[4873]: I0219 10:05:26.299620 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ljn4d" podStartSLOduration=2.299601247 podStartE2EDuration="2.299601247s" podCreationTimestamp="2026-02-19 10:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:26.295303501 +0000 UTC m=+1235.584735139" watchObservedRunningTime="2026-02-19 10:05:26.299601247 +0000 UTC m=+1235.589032895" Feb 19 10:05:26 crc kubenswrapper[4873]: I0219 
10:05:26.321233 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.509697875 podStartE2EDuration="5.321210833s" podCreationTimestamp="2026-02-19 10:05:21 +0000 UTC" firstStartedPulling="2026-02-19 10:05:22.061004541 +0000 UTC m=+1231.350436179" lastFinishedPulling="2026-02-19 10:05:24.872517499 +0000 UTC m=+1234.161949137" observedRunningTime="2026-02-19 10:05:26.313472661 +0000 UTC m=+1235.602904299" watchObservedRunningTime="2026-02-19 10:05:26.321210833 +0000 UTC m=+1235.610642471" Feb 19 10:05:27 crc kubenswrapper[4873]: I0219 10:05:27.497750 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561650f5-0705-4bab-903d-66bba11301ce" path="/var/lib/kubelet/pods/561650f5-0705-4bab-903d-66bba11301ce/volumes" Feb 19 10:05:29 crc kubenswrapper[4873]: I0219 10:05:29.545452 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:29 crc kubenswrapper[4873]: I0219 10:05:29.545770 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:30 crc kubenswrapper[4873]: I0219 10:05:30.556245 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:30 crc kubenswrapper[4873]: I0219 10:05:30.556291 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:31 crc kubenswrapper[4873]: I0219 10:05:31.325119 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="355c3bd2-5fb4-4a28-be15-e766b61eeed9" containerID="087819f431db46b81166e897b21b99194aa2c81f307651c133879685fcb5a03d" exitCode=0 Feb 19 10:05:31 crc kubenswrapper[4873]: I0219 10:05:31.325200 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ljn4d" event={"ID":"355c3bd2-5fb4-4a28-be15-e766b61eeed9","Type":"ContainerDied","Data":"087819f431db46b81166e897b21b99194aa2c81f307651c133879685fcb5a03d"} Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.787355 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.913958 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts\") pod \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.914050 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fnt9\" (UniqueName: \"kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9\") pod \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.914078 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data\") pod \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\" (UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.914167 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle\") pod \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\" 
(UID: \"355c3bd2-5fb4-4a28-be15-e766b61eeed9\") " Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.919728 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9" (OuterVolumeSpecName: "kube-api-access-2fnt9") pod "355c3bd2-5fb4-4a28-be15-e766b61eeed9" (UID: "355c3bd2-5fb4-4a28-be15-e766b61eeed9"). InnerVolumeSpecName "kube-api-access-2fnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.919830 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts" (OuterVolumeSpecName: "scripts") pod "355c3bd2-5fb4-4a28-be15-e766b61eeed9" (UID: "355c3bd2-5fb4-4a28-be15-e766b61eeed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.949332 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "355c3bd2-5fb4-4a28-be15-e766b61eeed9" (UID: "355c3bd2-5fb4-4a28-be15-e766b61eeed9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:32 crc kubenswrapper[4873]: I0219 10:05:32.953711 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data" (OuterVolumeSpecName: "config-data") pod "355c3bd2-5fb4-4a28-be15-e766b61eeed9" (UID: "355c3bd2-5fb4-4a28-be15-e766b61eeed9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.016800 4873 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.016864 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fnt9\" (UniqueName: \"kubernetes.io/projected/355c3bd2-5fb4-4a28-be15-e766b61eeed9-kube-api-access-2fnt9\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.016880 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.016892 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355c3bd2-5fb4-4a28-be15-e766b61eeed9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.347663 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ljn4d" event={"ID":"355c3bd2-5fb4-4a28-be15-e766b61eeed9","Type":"ContainerDied","Data":"f92a5b55c4afad8b7be3982a8f532ea10a49853d42efb23114e4c171eee5c339"} Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.347715 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92a5b55c4afad8b7be3982a8f532ea10a49853d42efb23114e4c171eee5c339" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.347732 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ljn4d" Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.519373 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.519655 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-log" containerID="cri-o://a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74" gracePeriod=30 Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.519750 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-api" containerID="cri-o://ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb" gracePeriod=30 Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.544700 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.544981 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-log" containerID="cri-o://109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609" gracePeriod=30 Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.545051 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-metadata" containerID="cri-o://772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c" gracePeriod=30 Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.558614 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:33 crc kubenswrapper[4873]: I0219 10:05:33.558898 4873 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" containerName="nova-scheduler-scheduler" containerID="cri-o://0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c" gracePeriod=30 Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.356527 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerID="109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609" exitCode=143 Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.356590 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerDied","Data":"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609"} Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.358548 4873 generic.go:334] "Generic (PLEG): container finished" podID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerID="a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74" exitCode=143 Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.358579 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerDied","Data":"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74"} Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.986730 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4873]: I0219 10:05:34.995277 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.070887 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.070961 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttk6m\" (UniqueName: \"kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m\") pod \"ab0c5b09-1134-4319-890d-8d42e916fc4c\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.071035 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.071058 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjk8d\" (UniqueName: \"kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.071116 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.071178 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs\") pod \"ab0c5b09-1134-4319-890d-8d42e916fc4c\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.071541 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs" (OuterVolumeSpecName: "logs") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072047 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle\") pod \"ab0c5b09-1134-4319-890d-8d42e916fc4c\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072076 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072127 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs\") pod \"ab0c5b09-1134-4319-890d-8d42e916fc4c\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072191 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle\") pod \"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\" (UID: 
\"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072242 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data\") pod \"ab0c5b09-1134-4319-890d-8d42e916fc4c\" (UID: \"ab0c5b09-1134-4319-890d-8d42e916fc4c\") " Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.072728 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.073817 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs" (OuterVolumeSpecName: "logs") pod "ab0c5b09-1134-4319-890d-8d42e916fc4c" (UID: "ab0c5b09-1134-4319-890d-8d42e916fc4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.078208 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d" (OuterVolumeSpecName: "kube-api-access-qjk8d") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "kube-api-access-qjk8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.095462 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m" (OuterVolumeSpecName: "kube-api-access-ttk6m") pod "ab0c5b09-1134-4319-890d-8d42e916fc4c" (UID: "ab0c5b09-1134-4319-890d-8d42e916fc4c"). InnerVolumeSpecName "kube-api-access-ttk6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.106249 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data" (OuterVolumeSpecName: "config-data") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.150871 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab0c5b09-1134-4319-890d-8d42e916fc4c" (UID: "ab0c5b09-1134-4319-890d-8d42e916fc4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.166378 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.166778 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data" (OuterVolumeSpecName: "config-data") pod "ab0c5b09-1134-4319-890d-8d42e916fc4c" (UID: "ab0c5b09-1134-4319-890d-8d42e916fc4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174529 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174569 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttk6m\" (UniqueName: \"kubernetes.io/projected/ab0c5b09-1134-4319-890d-8d42e916fc4c-kube-api-access-ttk6m\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174584 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174595 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjk8d\" (UniqueName: \"kubernetes.io/projected/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-kube-api-access-qjk8d\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174607 4873 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174617 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.174628 4873 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab0c5b09-1134-4319-890d-8d42e916fc4c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.194791 4873 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ab0c5b09-1134-4319-890d-8d42e916fc4c" (UID: "ab0c5b09-1134-4319-890d-8d42e916fc4c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.205256 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.207736 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" (UID: "772434a9-08d5-499e-9ea1-e9ed0cc1e1b6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.276190 4873 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab0c5b09-1134-4319-890d-8d42e916fc4c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.276229 4873 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.276243 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.368872 4873 generic.go:334] "Generic (PLEG): container finished" podID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerID="ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb" exitCode=0 Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.368947 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerDied","Data":"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb"} Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.368977 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"772434a9-08d5-499e-9ea1-e9ed0cc1e1b6","Type":"ContainerDied","Data":"11443c61c72c773d1b9e2f4dbdc5cf11081029e2fa850f7683c10c10c8471883"} Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.368993 4873 scope.go:117] "RemoveContainer" containerID="ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.368994 4873 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.370818 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.370779 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerID="772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c" exitCode=0 Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.370865 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerDied","Data":"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c"} Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.370898 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab0c5b09-1134-4319-890d-8d42e916fc4c","Type":"ContainerDied","Data":"bccf19aed991c39d7abd6ccba3455f58690ec6cd1d9a81513e1e6040784b81a5"} Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.392735 4873 scope.go:117] "RemoveContainer" containerID="a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.427872 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.435675 4873 scope.go:117] "RemoveContainer" containerID="ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.436171 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb\": container with ID starting with ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb not found: ID does not exist" 
containerID="ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.436235 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb"} err="failed to get container status \"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb\": rpc error: code = NotFound desc = could not find container \"ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb\": container with ID starting with ee64d4a6004bb57017d578c308d94024de77698e585848d30e9375ee1fa2c2cb not found: ID does not exist" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.436273 4873 scope.go:117] "RemoveContainer" containerID="a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.436578 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74\": container with ID starting with a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74 not found: ID does not exist" containerID="a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.436618 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74"} err="failed to get container status \"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74\": rpc error: code = NotFound desc = could not find container \"a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74\": container with ID starting with a591f50f852c32017b43b07ceb566ed62ac4d82f0a926ca84d57c3237e89bc74 not found: ID does not exist" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.436647 4873 scope.go:117] 
"RemoveContainer" containerID="772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.447917 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.459622 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.470880 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.483588 4873 scope.go:117] "RemoveContainer" containerID="109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.496047 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" path="/var/lib/kubelet/pods/772434a9-08d5-499e-9ea1-e9ed0cc1e1b6/volumes" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.497942 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" path="/var/lib/kubelet/pods/ab0c5b09-1134-4319-890d-8d42e916fc4c/volumes" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.514578 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515228 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-log" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515246 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-log" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515271 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-api" Feb 19 10:05:35 crc 
kubenswrapper[4873]: I0219 10:05:35.515277 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-api" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515311 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="init" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515318 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="init" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515325 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="dnsmasq-dns" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515331 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="dnsmasq-dns" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515345 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-log" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515353 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-log" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515366 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-metadata" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515372 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-metadata" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.515394 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355c3bd2-5fb4-4a28-be15-e766b61eeed9" containerName="nova-manage" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515400 
4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="355c3bd2-5fb4-4a28-be15-e766b61eeed9" containerName="nova-manage" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515590 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="561650f5-0705-4bab-903d-66bba11301ce" containerName="dnsmasq-dns" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515601 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-api" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515615 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="772434a9-08d5-499e-9ea1-e9ed0cc1e1b6" containerName="nova-api-log" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.515629 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="355c3bd2-5fb4-4a28-be15-e766b61eeed9" containerName="nova-manage" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.516259 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-metadata" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.516277 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab0c5b09-1134-4319-890d-8d42e916fc4c" containerName="nova-metadata-log" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.517611 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.519599 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.519933 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.525748 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.527454 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.530408 4873 scope.go:117] "RemoveContainer" containerID="772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.531932 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c\": container with ID starting with 772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c not found: ID does not exist" containerID="772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.531992 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c"} err="failed to get container status \"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c\": rpc error: code = NotFound desc = could not find container \"772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c\": container with ID starting with 772f80f78694557364cb6aacede90edcdbc43950df0974bec13a72c5fca43e1c not found: ID does not exist" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 
10:05:35.532021 4873 scope.go:117] "RemoveContainer" containerID="109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609" Feb 19 10:05:35 crc kubenswrapper[4873]: E0219 10:05:35.532549 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609\": container with ID starting with 109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609 not found: ID does not exist" containerID="109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.532593 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609"} err="failed to get container status \"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609\": rpc error: code = NotFound desc = could not find container \"109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609\": container with ID starting with 109894b19d76dd1c9ccf380213b2386f244585a42e16fabe157c7ca98cdcf609 not found: ID does not exist" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.536035 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.542575 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.542953 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.543162 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.549726 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:35 
crc kubenswrapper[4873]: I0219 10:05:35.582356 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-config-data\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582426 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4e613e-0a31-4191-9afb-4fd0300586f9-logs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582491 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582659 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582729 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-logs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582756 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-config-data\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582852 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66vr\" (UniqueName: \"kubernetes.io/projected/4f4e613e-0a31-4191-9afb-4fd0300586f9-kube-api-access-z66vr\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582906 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582927 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckp52\" (UniqueName: \"kubernetes.io/projected/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-kube-api-access-ckp52\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.582957 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684618 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684721 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-logs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684761 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-config-data\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684871 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z66vr\" (UniqueName: \"kubernetes.io/projected/4f4e613e-0a31-4191-9afb-4fd0300586f9-kube-api-access-z66vr\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684919 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: 
I0219 10:05:35.684948 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckp52\" (UniqueName: \"kubernetes.io/projected/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-kube-api-access-ckp52\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.684986 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.685017 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.685086 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-config-data\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.685209 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4e613e-0a31-4191-9afb-4fd0300586f9-logs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.685247 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.686530 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-logs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.687589 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f4e613e-0a31-4191-9afb-4fd0300586f9-logs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.691303 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-config-data\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.691313 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.691396 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.691635 4873 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.692534 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.699977 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f4e613e-0a31-4191-9afb-4fd0300586f9-config-data\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") " pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.701948 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.703436 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckp52\" (UniqueName: \"kubernetes.io/projected/15cbab3c-9843-4bf6-b0e8-b65dec1e5112-kube-api-access-ckp52\") pod \"nova-metadata-0\" (UID: \"15cbab3c-9843-4bf6-b0e8-b65dec1e5112\") " pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.707268 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z66vr\" (UniqueName: \"kubernetes.io/projected/4f4e613e-0a31-4191-9afb-4fd0300586f9-kube-api-access-z66vr\") pod \"nova-api-0\" (UID: \"4f4e613e-0a31-4191-9afb-4fd0300586f9\") 
" pod="openstack/nova-api-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.843945 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:35 crc kubenswrapper[4873]: I0219 10:05:35.857949 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:36 crc kubenswrapper[4873]: I0219 10:05:36.297667 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:36 crc kubenswrapper[4873]: W0219 10:05:36.298201 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4e613e_0a31_4191_9afb_4fd0300586f9.slice/crio-43de6e5637d1b8678ecde0ab5263ea883f419f771bacbc5ddf5e6a1395de3e37 WatchSource:0}: Error finding container 43de6e5637d1b8678ecde0ab5263ea883f419f771bacbc5ddf5e6a1395de3e37: Status 404 returned error can't find the container with id 43de6e5637d1b8678ecde0ab5263ea883f419f771bacbc5ddf5e6a1395de3e37 Feb 19 10:05:36 crc kubenswrapper[4873]: W0219 10:05:36.375924 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15cbab3c_9843_4bf6_b0e8_b65dec1e5112.slice/crio-bafa115a07f4a93bc6f7ece8d23b0c1c97209d3481343763924ca339e6de1621 WatchSource:0}: Error finding container bafa115a07f4a93bc6f7ece8d23b0c1c97209d3481343763924ca339e6de1621: Status 404 returned error can't find the container with id bafa115a07f4a93bc6f7ece8d23b0c1c97209d3481343763924ca339e6de1621 Feb 19 10:05:36 crc kubenswrapper[4873]: I0219 10:05:36.377771 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:36 crc kubenswrapper[4873]: I0219 10:05:36.382622 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4f4e613e-0a31-4191-9afb-4fd0300586f9","Type":"ContainerStarted","Data":"43de6e5637d1b8678ecde0ab5263ea883f419f771bacbc5ddf5e6a1395de3e37"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.340148 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.395674 4873 generic.go:334] "Generic (PLEG): container finished" podID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" containerID="0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c" exitCode=0 Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.395727 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.395788 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8db35141-6a4c-41cb-8a70-c68ab32fb2fe","Type":"ContainerDied","Data":"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.395903 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8db35141-6a4c-41cb-8a70-c68ab32fb2fe","Type":"ContainerDied","Data":"f055548cf56af3386b5caa98b53e876cf48bc03e71f26f63cd2f36d9a6f05688"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.395946 4873 scope.go:117] "RemoveContainer" containerID="0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.399427 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f4e613e-0a31-4191-9afb-4fd0300586f9","Type":"ContainerStarted","Data":"9d66b0406dd8f3c71680ca8ff73a8bc9b08c01bf307f2c8c00ef8d085181f558"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.399461 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4f4e613e-0a31-4191-9afb-4fd0300586f9","Type":"ContainerStarted","Data":"c2cf9eb54249a68ed8392ecaa69612f7ada56c52548f065a93329538658ab765"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.403352 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15cbab3c-9843-4bf6-b0e8-b65dec1e5112","Type":"ContainerStarted","Data":"4d76dad0283fc179b6ca7ee703579cd31efa8b206fd7cd2dbde38f28d03e9d19"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.403382 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15cbab3c-9843-4bf6-b0e8-b65dec1e5112","Type":"ContainerStarted","Data":"7cfe4de0a0212b2f5ffbcac217726fed9cb19ad2e9e4b5106643a951701d1708"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.403392 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15cbab3c-9843-4bf6-b0e8-b65dec1e5112","Type":"ContainerStarted","Data":"bafa115a07f4a93bc6f7ece8d23b0c1c97209d3481343763924ca339e6de1621"} Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.424534 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4nvb\" (UniqueName: \"kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb\") pod \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.424814 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle\") pod \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.424889 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data\") pod \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\" (UID: \"8db35141-6a4c-41cb-8a70-c68ab32fb2fe\") " Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.432662 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb" (OuterVolumeSpecName: "kube-api-access-m4nvb") pod "8db35141-6a4c-41cb-8a70-c68ab32fb2fe" (UID: "8db35141-6a4c-41cb-8a70-c68ab32fb2fe"). InnerVolumeSpecName "kube-api-access-m4nvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.436560 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.436544056 podStartE2EDuration="2.436544056s" podCreationTimestamp="2026-02-19 10:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:37.419183006 +0000 UTC m=+1246.708614654" watchObservedRunningTime="2026-02-19 10:05:37.436544056 +0000 UTC m=+1246.725975694" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.440942 4873 scope.go:117] "RemoveContainer" containerID="0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c" Feb 19 10:05:37 crc kubenswrapper[4873]: E0219 10:05:37.441571 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c\": container with ID starting with 0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c not found: ID does not exist" containerID="0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.441699 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c"} err="failed to get container status \"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c\": rpc error: code = NotFound desc = could not find container \"0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c\": container with ID starting with 0076bd1c46cbf74257579e26ec45274c90ac1a1b4889da41cc03fa8662a3134c not found: ID does not exist" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.460984 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.460959032 podStartE2EDuration="2.460959032s" podCreationTimestamp="2026-02-19 10:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:37.450142553 +0000 UTC m=+1246.739574201" watchObservedRunningTime="2026-02-19 10:05:37.460959032 +0000 UTC m=+1246.750390670" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.464366 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data" (OuterVolumeSpecName: "config-data") pod "8db35141-6a4c-41cb-8a70-c68ab32fb2fe" (UID: "8db35141-6a4c-41cb-8a70-c68ab32fb2fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.465415 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8db35141-6a4c-41cb-8a70-c68ab32fb2fe" (UID: "8db35141-6a4c-41cb-8a70-c68ab32fb2fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.528226 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.528376 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.528540 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4nvb\" (UniqueName: \"kubernetes.io/projected/8db35141-6a4c-41cb-8a70-c68ab32fb2fe-kube-api-access-m4nvb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.719210 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.730681 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.750414 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:37 crc kubenswrapper[4873]: E0219 10:05:37.751081 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" containerName="nova-scheduler-scheduler" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.751139 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" containerName="nova-scheduler-scheduler" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.751464 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" containerName="nova-scheduler-scheduler" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 
10:05:37.752514 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.756234 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.763257 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.834252 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h79t\" (UniqueName: \"kubernetes.io/projected/adb0395e-00f8-4bc6-a0a6-2b956235c58c-kube-api-access-9h79t\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.834338 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-config-data\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.834445 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.936571 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-config-data\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc 
kubenswrapper[4873]: I0219 10:05:37.936951 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.937163 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h79t\" (UniqueName: \"kubernetes.io/projected/adb0395e-00f8-4bc6-a0a6-2b956235c58c-kube-api-access-9h79t\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.940162 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-config-data\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.942463 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb0395e-00f8-4bc6-a0a6-2b956235c58c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:37 crc kubenswrapper[4873]: I0219 10:05:37.953984 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h79t\" (UniqueName: \"kubernetes.io/projected/adb0395e-00f8-4bc6-a0a6-2b956235c58c-kube-api-access-9h79t\") pod \"nova-scheduler-0\" (UID: \"adb0395e-00f8-4bc6-a0a6-2b956235c58c\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:38 crc kubenswrapper[4873]: I0219 10:05:38.070592 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:38 crc kubenswrapper[4873]: I0219 10:05:38.501754 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:38 crc kubenswrapper[4873]: W0219 10:05:38.503789 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb0395e_00f8_4bc6_a0a6_2b956235c58c.slice/crio-80d2b1106b66a9e1508060bab5ff72cc07015decc7183687b312f61552fc028b WatchSource:0}: Error finding container 80d2b1106b66a9e1508060bab5ff72cc07015decc7183687b312f61552fc028b: Status 404 returned error can't find the container with id 80d2b1106b66a9e1508060bab5ff72cc07015decc7183687b312f61552fc028b Feb 19 10:05:39 crc kubenswrapper[4873]: I0219 10:05:39.425206 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"adb0395e-00f8-4bc6-a0a6-2b956235c58c","Type":"ContainerStarted","Data":"024963f8cbb55ebe7c4bbd80648c5d800f6606f9b002b8ae290401573319520a"} Feb 19 10:05:39 crc kubenswrapper[4873]: I0219 10:05:39.425543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"adb0395e-00f8-4bc6-a0a6-2b956235c58c","Type":"ContainerStarted","Data":"80d2b1106b66a9e1508060bab5ff72cc07015decc7183687b312f61552fc028b"} Feb 19 10:05:39 crc kubenswrapper[4873]: I0219 10:05:39.448783 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.448767521 podStartE2EDuration="2.448767521s" podCreationTimestamp="2026-02-19 10:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:39.447019368 +0000 UTC m=+1248.736451006" watchObservedRunningTime="2026-02-19 10:05:39.448767521 +0000 UTC m=+1248.738199159" Feb 19 10:05:39 crc kubenswrapper[4873]: I0219 10:05:39.494989 4873 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db35141-6a4c-41cb-8a70-c68ab32fb2fe" path="/var/lib/kubelet/pods/8db35141-6a4c-41cb-8a70-c68ab32fb2fe/volumes" Feb 19 10:05:40 crc kubenswrapper[4873]: I0219 10:05:40.845284 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:40 crc kubenswrapper[4873]: I0219 10:05:40.846077 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:43 crc kubenswrapper[4873]: I0219 10:05:43.071746 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4873]: I0219 10:05:45.959032 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:46 crc kubenswrapper[4873]: I0219 10:05:46.020822 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:46 crc kubenswrapper[4873]: I0219 10:05:46.021142 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:46 crc kubenswrapper[4873]: I0219 10:05:46.021256 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:47 crc kubenswrapper[4873]: I0219 10:05:47.020491 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="15cbab3c-9843-4bf6-b0e8-b65dec1e5112" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:47 crc kubenswrapper[4873]: I0219 10:05:47.020587 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="15cbab3c-9843-4bf6-b0e8-b65dec1e5112" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.225:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:47 crc kubenswrapper[4873]: I0219 10:05:47.021037 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f4e613e-0a31-4191-9afb-4fd0300586f9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:47 crc kubenswrapper[4873]: I0219 10:05:47.021872 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f4e613e-0a31-4191-9afb-4fd0300586f9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:48 crc kubenswrapper[4873]: I0219 10:05:48.071437 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:05:48 crc kubenswrapper[4873]: I0219 10:05:48.101221 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:05:49 crc kubenswrapper[4873]: I0219 10:05:49.035656 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:05:51 crc kubenswrapper[4873]: I0219 10:05:51.623669 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.850458 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.856741 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.866952 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.895643 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.896015 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.900008 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4873]: I0219 10:05:55.912548 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:05:56 crc kubenswrapper[4873]: I0219 10:05:56.076440 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:05:56 crc kubenswrapper[4873]: I0219 10:05:56.081273 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:05:56 crc kubenswrapper[4873]: I0219 10:05:56.087626 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4873]: I0219 10:06:03.900740 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:05 crc kubenswrapper[4873]: I0219 10:06:05.021617 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:07 crc kubenswrapper[4873]: I0219 10:06:07.310472 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="rabbitmq" containerID="cri-o://bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9" gracePeriod=604797 Feb 19 10:06:08 crc kubenswrapper[4873]: I0219 10:06:08.364069 4873 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-cell1-server-0" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="rabbitmq" containerID="cri-o://aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72" gracePeriod=604797 Feb 19 10:06:08 crc kubenswrapper[4873]: I0219 10:06:08.989735 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.142453 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjp76\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.142561 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.142585 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143325 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143363 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143404 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143486 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143547 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143590 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.143617 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 
10:06:09.143715 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret\") pod \"9251ac9a-275e-4622-83a2-121d59ec8cd1\" (UID: \"9251ac9a-275e-4622-83a2-121d59ec8cd1\") " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.145150 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.147370 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.150832 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.152114 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.152350 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info" (OuterVolumeSpecName: "pod-info") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.154970 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.155692 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76" (OuterVolumeSpecName: "kube-api-access-vjp76") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "kube-api-access-vjp76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.159156 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.178238 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data" (OuterVolumeSpecName: "config-data") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.220652 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf" (OuterVolumeSpecName: "server-conf") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.224397 4873 generic.go:334] "Generic (PLEG): container finished" podID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerID="bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9" exitCode=0 Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.224470 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerDied","Data":"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9"} Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.224504 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9251ac9a-275e-4622-83a2-121d59ec8cd1","Type":"ContainerDied","Data":"7661fe6352a716a9db14456953448866e2c9797ab10f540b398fdf6a05d1c0b7"} Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.224525 4873 scope.go:117] "RemoveContainer" containerID="bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.224703 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247214 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247548 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247564 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247577 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247589 4873 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9251ac9a-275e-4622-83a2-121d59ec8cd1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247599 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjp76\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-kube-api-access-vjp76\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247625 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 
10:06:09.247638 4873 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9251ac9a-275e-4622-83a2-121d59ec8cd1-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247660 4873 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.247672 4873 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9251ac9a-275e-4622-83a2-121d59ec8cd1-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.271240 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.271482 4873 scope.go:117] "RemoveContainer" containerID="190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.318730 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9251ac9a-275e-4622-83a2-121d59ec8cd1" (UID: "9251ac9a-275e-4622-83a2-121d59ec8cd1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.333967 4873 scope.go:117] "RemoveContainer" containerID="bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9" Feb 19 10:06:09 crc kubenswrapper[4873]: E0219 10:06:09.334351 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9\": container with ID starting with bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9 not found: ID does not exist" containerID="bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.334380 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9"} err="failed to get container status \"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9\": rpc error: code = NotFound desc = could not find container \"bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9\": container with ID starting with bf1da2993e733619c7b1bf83f5278da19d4e84782b865399c0e41f4b70eb6bd9 not found: ID does not exist" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.334402 4873 scope.go:117] "RemoveContainer" containerID="190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f" Feb 19 10:06:09 crc kubenswrapper[4873]: E0219 10:06:09.334925 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f\": container with ID starting with 190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f not found: ID does not exist" containerID="190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.334960 
4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f"} err="failed to get container status \"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f\": rpc error: code = NotFound desc = could not find container \"190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f\": container with ID starting with 190198ff0e2017378a91068584666652381227259351cc680fcbb3817b8e453f not found: ID does not exist" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.349131 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.349159 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9251ac9a-275e-4622-83a2-121d59ec8cd1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.546048 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.559886 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.584887 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:09 crc kubenswrapper[4873]: E0219 10:06:09.585364 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="rabbitmq" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.585380 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="rabbitmq" Feb 19 10:06:09 crc kubenswrapper[4873]: E0219 10:06:09.585398 4873 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="setup-container" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.585405 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="setup-container" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.585592 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" containerName="rabbitmq" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.586999 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.594579 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.594789 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fnhrw" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.594891 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.595008 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.595114 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.595751 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.595859 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.629996 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:09 crc 
kubenswrapper[4873]: I0219 10:06:09.756413 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rq2\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-kube-api-access-z7rq2\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.756466 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d564a6d4-4702-4e96-b814-8d9f01db02e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.756578 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.756639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.756744 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.756932 
4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.757127 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.757195 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.757225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.757258 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d564a6d4-4702-4e96-b814-8d9f01db02e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.757378 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859497 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859561 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859613 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859635 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859653 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " 
pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859671 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d564a6d4-4702-4e96-b814-8d9f01db02e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859704 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859751 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rq2\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-kube-api-access-z7rq2\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859775 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d564a6d4-4702-4e96-b814-8d9f01db02e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859794 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.859813 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.861295 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.861619 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.862178 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.862675 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d564a6d4-4702-4e96-b814-8d9f01db02e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.867076 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.867040 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.868707 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.870696 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d564a6d4-4702-4e96-b814-8d9f01db02e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.870810 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d564a6d4-4702-4e96-b814-8d9f01db02e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.870855 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.884092 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z7rq2\" (UniqueName: \"kubernetes.io/projected/d564a6d4-4702-4e96-b814-8d9f01db02e5-kube-api-access-z7rq2\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:09 crc kubenswrapper[4873]: I0219 10:06:09.947408 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d564a6d4-4702-4e96-b814-8d9f01db02e5\") " pod="openstack/rabbitmq-server-0" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.031494 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.060674 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165300 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165384 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165462 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: 
\"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165557 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165608 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165653 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.165849 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.166053 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.166201 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.166248 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.166400 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info\") pod \"86685946-19ac-434a-974f-99b5beeda172\" (UID: \"86685946-19ac-434a-974f-99b5beeda172\") " Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.167869 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.184260 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info" (OuterVolumeSpecName: "pod-info") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.185929 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.191966 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n" (OuterVolumeSpecName: "kube-api-access-m275n") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "kube-api-access-m275n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.192206 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.194917 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.195250 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.212082 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.212338 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data" (OuterVolumeSpecName: "config-data") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.249271 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf" (OuterVolumeSpecName: "server-conf") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.253549 4873 generic.go:334] "Generic (PLEG): container finished" podID="86685946-19ac-434a-974f-99b5beeda172" containerID="aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72" exitCode=0 Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.253594 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerDied","Data":"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72"} Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.253646 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"86685946-19ac-434a-974f-99b5beeda172","Type":"ContainerDied","Data":"ef991a861997941a147c9b5a0da440f69f41ed8b1c1a849520b30accb3784df6"} Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.253667 4873 scope.go:117] "RemoveContainer" containerID="aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72" Feb 19 10:06:10 crc kubenswrapper[4873]: I0219 10:06:10.253893 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.270889 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271184 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271194 4873 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86685946-19ac-434a-974f-99b5beeda172-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271203 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271213 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271251 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271260 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-kube-api-access-m275n\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 
10:06:10.271268 4873 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271276 4873 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86685946-19ac-434a-974f-99b5beeda172-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.271284 4873 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86685946-19ac-434a-974f-99b5beeda172-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.311573 4873 scope.go:117] "RemoveContainer" containerID="853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.320878 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.366638 4873 scope.go:117] "RemoveContainer" containerID="aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72" Feb 19 10:06:11 crc kubenswrapper[4873]: E0219 10:06:10.367150 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72\": container with ID starting with aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72 not found: ID does not exist" containerID="aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.367179 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72"} err="failed to get container status \"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72\": rpc error: code = NotFound desc = could not find container \"aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72\": container with ID starting with aaf5e7116cb1bb3c6e4b474f08773f59b49977479dfd0072b42ee20f8a60cf72 not found: ID does not exist" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.367204 4873 scope.go:117] "RemoveContainer" containerID="853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb" Feb 19 10:06:11 crc kubenswrapper[4873]: E0219 10:06:10.369683 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb\": container with ID starting with 853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb not found: ID does not exist" containerID="853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.369722 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb"} err="failed to get container status \"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb\": rpc error: code = NotFound desc = could not find container \"853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb\": container with ID starting with 853752f484df74ddd70c3a27d9c1c59cdeac53c948f829a75d4f8ed34050d1fb not found: ID does not exist" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.373094 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 
10:06:10.383351 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86685946-19ac-434a-974f-99b5beeda172" (UID: "86685946-19ac-434a-974f-99b5beeda172"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.475234 4873 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86685946-19ac-434a-974f-99b5beeda172-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.630572 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.873238 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.891264 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.900443 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:11 crc kubenswrapper[4873]: E0219 10:06:10.900899 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="setup-container" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.900912 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="setup-container" Feb 19 10:06:11 crc kubenswrapper[4873]: E0219 10:06:10.900950 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="rabbitmq" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.900956 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="rabbitmq" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.901167 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="86685946-19ac-434a-974f-99b5beeda172" containerName="rabbitmq" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.902233 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.904281 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.904424 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.904511 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.904531 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.905316 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6k7rl" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.906889 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.910448 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.913814 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990627 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990735 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990762 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990787 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990809 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1150426f-909f-4b05-b216-ccf29f7039eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990872 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.990933 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.991012 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrb7\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-kube-api-access-lrrb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.991132 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1150426f-909f-4b05-b216-ccf29f7039eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.991201 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:10.991252 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092718 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092811 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092849 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrb7\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-kube-api-access-lrrb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092897 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1150426f-909f-4b05-b216-ccf29f7039eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092926 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.092956 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093028 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093062 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093077 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093094 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093122 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1150426f-909f-4b05-b216-ccf29f7039eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093475 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.093856 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.094365 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.095006 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc 
kubenswrapper[4873]: I0219 10:06:11.095298 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.099258 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1150426f-909f-4b05-b216-ccf29f7039eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.104494 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1150426f-909f-4b05-b216-ccf29f7039eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.112226 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.113690 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1150426f-909f-4b05-b216-ccf29f7039eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.114771 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.122763 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrb7\" (UniqueName: \"kubernetes.io/projected/1150426f-909f-4b05-b216-ccf29f7039eb-kube-api-access-lrrb7\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.142204 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1150426f-909f-4b05-b216-ccf29f7039eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.222532 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.313289 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d564a6d4-4702-4e96-b814-8d9f01db02e5","Type":"ContainerStarted","Data":"c7cc2db082581af74d278481e43014ad33c557364ed01b95368b3d16b032cd52"} Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.501587 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86685946-19ac-434a-974f-99b5beeda172" path="/var/lib/kubelet/pods/86685946-19ac-434a-974f-99b5beeda172/volumes" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.503998 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9251ac9a-275e-4622-83a2-121d59ec8cd1" path="/var/lib/kubelet/pods/9251ac9a-275e-4622-83a2-121d59ec8cd1/volumes" Feb 19 10:06:11 crc kubenswrapper[4873]: I0219 10:06:11.777281 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:12 crc kubenswrapper[4873]: I0219 10:06:12.324896 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d564a6d4-4702-4e96-b814-8d9f01db02e5","Type":"ContainerStarted","Data":"eb081e9d62d248fe118cb009fbd9c708c3a1079ddddb512798daffee0aad2659"} Feb 19 10:06:12 crc kubenswrapper[4873]: I0219 10:06:12.329114 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1150426f-909f-4b05-b216-ccf29f7039eb","Type":"ContainerStarted","Data":"b5136359797a48bb7031d97e15dfa7e628dac2b662ce623e5198317766d7b417"} Feb 19 10:06:14 crc kubenswrapper[4873]: I0219 10:06:14.349457 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1150426f-909f-4b05-b216-ccf29f7039eb","Type":"ContainerStarted","Data":"e92393b6741f6fef922d6f42af0b98f135a455cf95571e6601d11aec865d3f88"} Feb 19 10:06:18 crc kubenswrapper[4873]: I0219 
10:06:18.240251 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:06:18 crc kubenswrapper[4873]: I0219 10:06:18.240804 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.317704 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.324231 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.331405 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.355780 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359272 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359330 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359358 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359374 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359401 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359424 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.359448 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8rw8h\" (UniqueName: \"kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461294 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461334 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461356 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461396 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.461475 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rw8h\" (UniqueName: \"kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.462951 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.463129 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.463656 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.463855 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.464376 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.465860 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.492827 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rw8h\" (UniqueName: \"kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h\") pod \"dnsmasq-dns-6fdbdb9c55-dppcj\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:19 crc kubenswrapper[4873]: I0219 10:06:19.657870 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:20 crc kubenswrapper[4873]: I0219 10:06:20.118481 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:20 crc kubenswrapper[4873]: I0219 10:06:20.414718 4873 generic.go:334] "Generic (PLEG): container finished" podID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerID="c5f22a31813d866744a5f77fa474ee1d76800c4836a597abacab19d63e9b80d9" exitCode=0 Feb 19 10:06:20 crc kubenswrapper[4873]: I0219 10:06:20.414794 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" event={"ID":"988a3bc7-bb05-4522-a7ae-7c3be4478924","Type":"ContainerDied","Data":"c5f22a31813d866744a5f77fa474ee1d76800c4836a597abacab19d63e9b80d9"} Feb 19 10:06:20 crc kubenswrapper[4873]: I0219 10:06:20.415055 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" event={"ID":"988a3bc7-bb05-4522-a7ae-7c3be4478924","Type":"ContainerStarted","Data":"6aa2dd4165f3de832bd64c7d6658e46d1dc0230ed8bd63d75af9c25f8da8717b"} Feb 19 10:06:21 crc kubenswrapper[4873]: I0219 10:06:21.426717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" event={"ID":"988a3bc7-bb05-4522-a7ae-7c3be4478924","Type":"ContainerStarted","Data":"72bdbd03b364c7b44bc9c7201f20a54c64ed8479aa1ac42baf6e1fee556e7bbc"} Feb 19 10:06:21 crc kubenswrapper[4873]: I0219 10:06:21.426991 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:21 crc kubenswrapper[4873]: I0219 10:06:21.450603 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" podStartSLOduration=2.450586434 podStartE2EDuration="2.450586434s" podCreationTimestamp="2026-02-19 10:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:21.44235877 +0000 UTC m=+1290.731790408" watchObservedRunningTime="2026-02-19 10:06:21.450586434 +0000 UTC m=+1290.740018072" Feb 19 10:06:29 crc kubenswrapper[4873]: I0219 10:06:29.660350 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:29 crc kubenswrapper[4873]: I0219 10:06:29.758165 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:06:29 crc kubenswrapper[4873]: I0219 10:06:29.758418 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="dnsmasq-dns" containerID="cri-o://2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e" gracePeriod=10 Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.000927 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c564b89cf-9v87f"] Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.004264 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.018379 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c564b89cf-9v87f"] Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.090409 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fbv\" (UniqueName: \"kubernetes.io/projected/20253d93-eafe-45db-b11e-338714ffd978-kube-api-access-r4fbv\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.090823 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-config\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.090884 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-sb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.090968 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.091116 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-nb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.091270 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-swift-storage-0\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.091531 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-svc\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194487 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fbv\" (UniqueName: \"kubernetes.io/projected/20253d93-eafe-45db-b11e-338714ffd978-kube-api-access-r4fbv\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194578 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-config\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194617 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-sb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194676 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194717 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-nb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194773 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-swift-storage-0\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.194817 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-svc\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.195854 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-svc\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.196497 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.197199 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-nb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.197704 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-config\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.197849 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-ovsdbserver-sb\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.198441 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/20253d93-eafe-45db-b11e-338714ffd978-dns-swift-storage-0\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.226296 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fbv\" (UniqueName: \"kubernetes.io/projected/20253d93-eafe-45db-b11e-338714ffd978-kube-api-access-r4fbv\") pod \"dnsmasq-dns-6c564b89cf-9v87f\" (UID: \"20253d93-eafe-45db-b11e-338714ffd978\") " pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.350056 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.351514 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398133 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398209 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398324 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: 
\"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398396 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398720 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx9vs\" (UniqueName: \"kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.398895 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config\") pod \"7fe74544-e8af-45bd-9193-2b247c5e002b\" (UID: \"7fe74544-e8af-45bd-9193-2b247c5e002b\") " Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.408368 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs" (OuterVolumeSpecName: "kube-api-access-zx9vs") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "kube-api-access-zx9vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.456318 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.464331 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.471496 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config" (OuterVolumeSpecName: "config") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.526679 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx9vs\" (UniqueName: \"kubernetes.io/projected/7fe74544-e8af-45bd-9193-2b247c5e002b-kube-api-access-zx9vs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.527182 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.527199 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.527214 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.543038 4873 generic.go:334] "Generic (PLEG): container finished" podID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerID="2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e" exitCode=0 Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.543144 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" event={"ID":"7fe74544-e8af-45bd-9193-2b247c5e002b","Type":"ContainerDied","Data":"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e"} Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.543174 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" event={"ID":"7fe74544-e8af-45bd-9193-2b247c5e002b","Type":"ContainerDied","Data":"bc3e27b7897877cb015837d9764a384c7709da0f91ae1099d9eccef91911ed99"} Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.543220 4873 scope.go:117] "RemoveContainer" containerID="2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.543392 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d65dbfc-jjvbb" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.574941 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.580243 4873 scope.go:117] "RemoveContainer" containerID="3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.580639 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7fe74544-e8af-45bd-9193-2b247c5e002b" (UID: "7fe74544-e8af-45bd-9193-2b247c5e002b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.616276 4873 scope.go:117] "RemoveContainer" containerID="2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e" Feb 19 10:06:30 crc kubenswrapper[4873]: E0219 10:06:30.616654 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e\": container with ID starting with 2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e not found: ID does not exist" containerID="2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.616694 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e"} err="failed to get container status \"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e\": rpc error: code = NotFound desc = could not find container \"2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e\": container with ID starting with 2ea5c5513c19c4058c8a4129fb3b5a3547db474091e874784f52fc36d3b3d60e not found: ID does not exist" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.616721 4873 scope.go:117] 
"RemoveContainer" containerID="3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d" Feb 19 10:06:30 crc kubenswrapper[4873]: E0219 10:06:30.616992 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d\": container with ID starting with 3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d not found: ID does not exist" containerID="3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.617021 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d"} err="failed to get container status \"3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d\": rpc error: code = NotFound desc = could not find container \"3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d\": container with ID starting with 3584138ebb0ff2c764fd4906203bf8dafbda732c59288ffd05b66db47c5cbd6d not found: ID does not exist" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.628732 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.628792 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe74544-e8af-45bd-9193-2b247c5e002b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.886243 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.895916 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78d65dbfc-jjvbb"] Feb 19 10:06:30 crc kubenswrapper[4873]: I0219 10:06:30.906330 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c564b89cf-9v87f"] Feb 19 10:06:31 crc kubenswrapper[4873]: I0219 10:06:31.498176 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" path="/var/lib/kubelet/pods/7fe74544-e8af-45bd-9193-2b247c5e002b/volumes" Feb 19 10:06:31 crc kubenswrapper[4873]: I0219 10:06:31.558317 4873 generic.go:334] "Generic (PLEG): container finished" podID="20253d93-eafe-45db-b11e-338714ffd978" containerID="7b6fb2011e0fd962e7437b181f0d3894901e01de64f7de57469e1175d0fa86fe" exitCode=0 Feb 19 10:06:31 crc kubenswrapper[4873]: I0219 10:06:31.558366 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" event={"ID":"20253d93-eafe-45db-b11e-338714ffd978","Type":"ContainerDied","Data":"7b6fb2011e0fd962e7437b181f0d3894901e01de64f7de57469e1175d0fa86fe"} Feb 19 10:06:31 crc kubenswrapper[4873]: I0219 10:06:31.558393 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" event={"ID":"20253d93-eafe-45db-b11e-338714ffd978","Type":"ContainerStarted","Data":"fb7ad8e83fa657cd7fe3472370e61269633a748f3b0a1baf36e81604a562de5e"} Feb 19 10:06:32 crc kubenswrapper[4873]: I0219 10:06:32.569382 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" event={"ID":"20253d93-eafe-45db-b11e-338714ffd978","Type":"ContainerStarted","Data":"eddf2c14b9e6e7d59df006669936aba8a73592bd75124ff204c065af7b3f552e"} Feb 19 10:06:32 crc kubenswrapper[4873]: I0219 10:06:32.569811 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:32 crc kubenswrapper[4873]: I0219 10:06:32.595916 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" podStartSLOduration=3.5958963710000003 podStartE2EDuration="3.595896371s" podCreationTimestamp="2026-02-19 10:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:32.591787859 +0000 UTC m=+1301.881219497" watchObservedRunningTime="2026-02-19 10:06:32.595896371 +0000 UTC m=+1301.885328009" Feb 19 10:06:40 crc kubenswrapper[4873]: I0219 10:06:40.357308 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c564b89cf-9v87f" Feb 19 10:06:40 crc kubenswrapper[4873]: I0219 10:06:40.430531 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:40 crc kubenswrapper[4873]: I0219 10:06:40.430891 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="dnsmasq-dns" containerID="cri-o://72bdbd03b364c7b44bc9c7201f20a54c64ed8479aa1ac42baf6e1fee556e7bbc" gracePeriod=10 Feb 19 10:06:40 crc kubenswrapper[4873]: I0219 10:06:40.708448 4873 generic.go:334] "Generic (PLEG): container finished" podID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerID="72bdbd03b364c7b44bc9c7201f20a54c64ed8479aa1ac42baf6e1fee556e7bbc" exitCode=0 Feb 19 10:06:40 crc kubenswrapper[4873]: I0219 10:06:40.708494 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" event={"ID":"988a3bc7-bb05-4522-a7ae-7c3be4478924","Type":"ContainerDied","Data":"72bdbd03b364c7b44bc9c7201f20a54c64ed8479aa1ac42baf6e1fee556e7bbc"} Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.042045 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.177692 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rw8h\" (UniqueName: \"kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.177771 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.177872 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.177932 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.178655 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.178784 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.178809 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0\") pod \"988a3bc7-bb05-4522-a7ae-7c3be4478924\" (UID: \"988a3bc7-bb05-4522-a7ae-7c3be4478924\") " Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.183412 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h" (OuterVolumeSpecName: "kube-api-access-8rw8h") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "kube-api-access-8rw8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.243507 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.245085 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.248000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.248400 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config" (OuterVolumeSpecName: "config") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.251932 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.255008 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "988a3bc7-bb05-4522-a7ae-7c3be4478924" (UID: "988a3bc7-bb05-4522-a7ae-7c3be4478924"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281263 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281302 4873 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281316 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rw8h\" (UniqueName: \"kubernetes.io/projected/988a3bc7-bb05-4522-a7ae-7c3be4478924-kube-api-access-8rw8h\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281327 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281340 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281350 4873 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.281361 4873 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/988a3bc7-bb05-4522-a7ae-7c3be4478924-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.720534 
4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" event={"ID":"988a3bc7-bb05-4522-a7ae-7c3be4478924","Type":"ContainerDied","Data":"6aa2dd4165f3de832bd64c7d6658e46d1dc0230ed8bd63d75af9c25f8da8717b"} Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.720588 4873 scope.go:117] "RemoveContainer" containerID="72bdbd03b364c7b44bc9c7201f20a54c64ed8479aa1ac42baf6e1fee556e7bbc" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.720889 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fdbdb9c55-dppcj" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.747902 4873 scope.go:117] "RemoveContainer" containerID="c5f22a31813d866744a5f77fa474ee1d76800c4836a597abacab19d63e9b80d9" Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.761621 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:41 crc kubenswrapper[4873]: I0219 10:06:41.771571 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fdbdb9c55-dppcj"] Feb 19 10:06:43 crc kubenswrapper[4873]: I0219 10:06:43.496937 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" path="/var/lib/kubelet/pods/988a3bc7-bb05-4522-a7ae-7c3be4478924/volumes" Feb 19 10:06:44 crc kubenswrapper[4873]: I0219 10:06:44.752623 4873 generic.go:334] "Generic (PLEG): container finished" podID="d564a6d4-4702-4e96-b814-8d9f01db02e5" containerID="eb081e9d62d248fe118cb009fbd9c708c3a1079ddddb512798daffee0aad2659" exitCode=0 Feb 19 10:06:44 crc kubenswrapper[4873]: I0219 10:06:44.752664 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d564a6d4-4702-4e96-b814-8d9f01db02e5","Type":"ContainerDied","Data":"eb081e9d62d248fe118cb009fbd9c708c3a1079ddddb512798daffee0aad2659"} Feb 19 10:06:46 crc kubenswrapper[4873]: I0219 10:06:46.775398 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d564a6d4-4702-4e96-b814-8d9f01db02e5","Type":"ContainerStarted","Data":"a59923a07efdebf36fc246826c0fc72de70f7236debbfc2afcfce4a1705f0602"} Feb 19 10:06:46 crc kubenswrapper[4873]: I0219 10:06:46.776134 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:06:46 crc kubenswrapper[4873]: I0219 10:06:46.777343 4873 generic.go:334] "Generic (PLEG): container finished" podID="1150426f-909f-4b05-b216-ccf29f7039eb" containerID="e92393b6741f6fef922d6f42af0b98f135a455cf95571e6601d11aec865d3f88" exitCode=0 Feb 19 10:06:46 crc kubenswrapper[4873]: I0219 10:06:46.777364 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1150426f-909f-4b05-b216-ccf29f7039eb","Type":"ContainerDied","Data":"e92393b6741f6fef922d6f42af0b98f135a455cf95571e6601d11aec865d3f88"} Feb 19 10:06:46 crc kubenswrapper[4873]: I0219 10:06:46.841446 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.841415854 podStartE2EDuration="37.841415854s" podCreationTimestamp="2026-02-19 10:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:46.811476702 +0000 UTC m=+1316.100908340" watchObservedRunningTime="2026-02-19 10:06:46.841415854 +0000 UTC m=+1316.130847502" Feb 19 10:06:47 crc kubenswrapper[4873]: I0219 10:06:47.889432 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1150426f-909f-4b05-b216-ccf29f7039eb","Type":"ContainerStarted","Data":"e7a020b0278c9620d00ad390cbdc9faab9ea346dca5ae496025db4c612b262de"} Feb 19 10:06:47 crc kubenswrapper[4873]: I0219 10:06:47.890659 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:06:48 crc kubenswrapper[4873]: I0219 10:06:48.241024 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:06:48 crc kubenswrapper[4873]: I0219 10:06:48.241081 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.215632 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.215612173 podStartE2EDuration="44.215612173s" podCreationTimestamp="2026-02-19 10:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:47.969890509 +0000 UTC m=+1317.259322147" watchObservedRunningTime="2026-02-19 10:06:54.215612173 +0000 UTC m=+1323.505043811" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.225908 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn"] Feb 19 10:06:54 crc kubenswrapper[4873]: E0219 10:06:54.226584 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.226618 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: E0219 10:06:54.226645 4873 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="init" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.226657 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="init" Feb 19 10:06:54 crc kubenswrapper[4873]: E0219 10:06:54.226697 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="init" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.226712 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="init" Feb 19 10:06:54 crc kubenswrapper[4873]: E0219 10:06:54.226740 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.226753 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.227082 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe74544-e8af-45bd-9193-2b247c5e002b" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.227201 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="988a3bc7-bb05-4522-a7ae-7c3be4478924" containerName="dnsmasq-dns" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.228251 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.230385 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.230670 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.231011 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.232232 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.236711 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn"] Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.319869 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.320060 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrsj\" (UniqueName: \"kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.320137 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.320190 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.421885 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.422086 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.422287 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrsj\" (UniqueName: 
\"kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.422322 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.429231 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.429966 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.432938 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.445083 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrsj\" (UniqueName: \"kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:54 crc kubenswrapper[4873]: I0219 10:06:54.554008 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:06:55 crc kubenswrapper[4873]: I0219 10:06:55.165291 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn"] Feb 19 10:06:55 crc kubenswrapper[4873]: I0219 10:06:55.978380 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" event={"ID":"fda37ba3-82f5-4d49-a15f-4dca53649ec7","Type":"ContainerStarted","Data":"2537d37d9435dc0666f8a0e5eee660828300fdb6865c995e2e3548737cc40d44"} Feb 19 10:07:00 crc kubenswrapper[4873]: I0219 10:07:00.064338 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 10:07:01 crc kubenswrapper[4873]: I0219 10:07:01.228309 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:07 crc kubenswrapper[4873]: I0219 10:07:07.111772 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" event={"ID":"fda37ba3-82f5-4d49-a15f-4dca53649ec7","Type":"ContainerStarted","Data":"93fa48ed56c25accad984cd4a8e384efd75d4a5f89fc72d2bd295f628f6bf22f"} Feb 19 10:07:08 crc kubenswrapper[4873]: I0219 10:07:08.146861 
4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" podStartSLOduration=2.502466896 podStartE2EDuration="14.146840794s" podCreationTimestamp="2026-02-19 10:06:54 +0000 UTC" firstStartedPulling="2026-02-19 10:06:55.165149033 +0000 UTC m=+1324.454580671" lastFinishedPulling="2026-02-19 10:07:06.809522931 +0000 UTC m=+1336.098954569" observedRunningTime="2026-02-19 10:07:08.136175 +0000 UTC m=+1337.425606668" watchObservedRunningTime="2026-02-19 10:07:08.146840794 +0000 UTC m=+1337.436272442" Feb 19 10:07:18 crc kubenswrapper[4873]: I0219 10:07:18.240817 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:07:18 crc kubenswrapper[4873]: I0219 10:07:18.241367 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:07:18 crc kubenswrapper[4873]: I0219 10:07:18.241414 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:07:18 crc kubenswrapper[4873]: I0219 10:07:18.241984 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:07:18 crc 
kubenswrapper[4873]: I0219 10:07:18.242055 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad" gracePeriod=600 Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.233742 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad" exitCode=0 Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.233812 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad"} Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.234165 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"} Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.234188 4873 scope.go:117] "RemoveContainer" containerID="4cf449f514dc24e840144e6f6decb8f1a064252cdbd9c34d791686fe659362f0" Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.235892 4873 generic.go:334] "Generic (PLEG): container finished" podID="fda37ba3-82f5-4d49-a15f-4dca53649ec7" containerID="93fa48ed56c25accad984cd4a8e384efd75d4a5f89fc72d2bd295f628f6bf22f" exitCode=0 Feb 19 10:07:19 crc kubenswrapper[4873]: I0219 10:07:19.235924 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" 
event={"ID":"fda37ba3-82f5-4d49-a15f-4dca53649ec7","Type":"ContainerDied","Data":"93fa48ed56c25accad984cd4a8e384efd75d4a5f89fc72d2bd295f628f6bf22f"} Feb 19 10:07:20 crc kubenswrapper[4873]: I0219 10:07:20.893703 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:07:20 crc kubenswrapper[4873]: I0219 10:07:20.997918 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrsj\" (UniqueName: \"kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj\") pod \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " Feb 19 10:07:20 crc kubenswrapper[4873]: I0219 10:07:20.998083 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam\") pod \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " Feb 19 10:07:20 crc kubenswrapper[4873]: I0219 10:07:20.998282 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle\") pod \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " Feb 19 10:07:20 crc kubenswrapper[4873]: I0219 10:07:20.998354 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory\") pod \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\" (UID: \"fda37ba3-82f5-4d49-a15f-4dca53649ec7\") " Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.005256 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fda37ba3-82f5-4d49-a15f-4dca53649ec7" (UID: "fda37ba3-82f5-4d49-a15f-4dca53649ec7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.012324 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj" (OuterVolumeSpecName: "kube-api-access-dgrsj") pod "fda37ba3-82f5-4d49-a15f-4dca53649ec7" (UID: "fda37ba3-82f5-4d49-a15f-4dca53649ec7"). InnerVolumeSpecName "kube-api-access-dgrsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.027461 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fda37ba3-82f5-4d49-a15f-4dca53649ec7" (UID: "fda37ba3-82f5-4d49-a15f-4dca53649ec7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.032703 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory" (OuterVolumeSpecName: "inventory") pod "fda37ba3-82f5-4d49-a15f-4dca53649ec7" (UID: "fda37ba3-82f5-4d49-a15f-4dca53649ec7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.101056 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.101129 4873 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.101143 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fda37ba3-82f5-4d49-a15f-4dca53649ec7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.101155 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgrsj\" (UniqueName: \"kubernetes.io/projected/fda37ba3-82f5-4d49-a15f-4dca53649ec7-kube-api-access-dgrsj\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.263479 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" event={"ID":"fda37ba3-82f5-4d49-a15f-4dca53649ec7","Type":"ContainerDied","Data":"2537d37d9435dc0666f8a0e5eee660828300fdb6865c995e2e3548737cc40d44"} Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.263727 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2537d37d9435dc0666f8a0e5eee660828300fdb6865c995e2e3548737cc40d44" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.263788 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.434592 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6"] Feb 19 10:07:21 crc kubenswrapper[4873]: E0219 10:07:21.435032 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda37ba3-82f5-4d49-a15f-4dca53649ec7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.435050 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda37ba3-82f5-4d49-a15f-4dca53649ec7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.435257 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda37ba3-82f5-4d49-a15f-4dca53649ec7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.436892 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.439746 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.439985 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.440147 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.448208 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6"] Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.451848 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.508906 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxqw\" (UniqueName: \"kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.508997 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.509084 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.610301 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxqw\" (UniqueName: \"kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.610430 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.610515 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.617907 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.620632 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.627440 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxqw\" (UniqueName: \"kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mt2n6\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:21 crc kubenswrapper[4873]: I0219 10:07:21.768589 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:22 crc kubenswrapper[4873]: I0219 10:07:22.458582 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6"] Feb 19 10:07:22 crc kubenswrapper[4873]: W0219 10:07:22.502925 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ba1c3b5_6b1a_4d7e_bbdd_fb492abd6647.slice/crio-a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883 WatchSource:0}: Error finding container a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883: Status 404 returned error can't find the container with id a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883 Feb 19 10:07:23 crc kubenswrapper[4873]: I0219 10:07:23.283541 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" event={"ID":"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647","Type":"ContainerStarted","Data":"a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883"} Feb 19 10:07:24 crc kubenswrapper[4873]: I0219 10:07:24.294424 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" event={"ID":"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647","Type":"ContainerStarted","Data":"9abf7155230d56a289acb3e84c2905166e6cff2383d1c2040f888bc83038da2d"} Feb 19 10:07:24 crc kubenswrapper[4873]: I0219 10:07:24.321939 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" podStartSLOduration=2.692315584 podStartE2EDuration="3.321923002s" podCreationTimestamp="2026-02-19 10:07:21 +0000 UTC" firstStartedPulling="2026-02-19 10:07:22.506118157 +0000 UTC m=+1351.795549795" lastFinishedPulling="2026-02-19 10:07:23.135725575 +0000 UTC m=+1352.425157213" observedRunningTime="2026-02-19 
10:07:24.310593331 +0000 UTC m=+1353.600024969" watchObservedRunningTime="2026-02-19 10:07:24.321923002 +0000 UTC m=+1353.611354640" Feb 19 10:07:26 crc kubenswrapper[4873]: I0219 10:07:26.314581 4873 generic.go:334] "Generic (PLEG): container finished" podID="3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" containerID="9abf7155230d56a289acb3e84c2905166e6cff2383d1c2040f888bc83038da2d" exitCode=0 Feb 19 10:07:26 crc kubenswrapper[4873]: I0219 10:07:26.314656 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" event={"ID":"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647","Type":"ContainerDied","Data":"9abf7155230d56a289acb3e84c2905166e6cff2383d1c2040f888bc83038da2d"} Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.770600 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.873359 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmxqw\" (UniqueName: \"kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw\") pod \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.873416 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory\") pod \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\" (UID: \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.873449 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam\") pod \"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\" (UID: 
\"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647\") " Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.880471 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw" (OuterVolumeSpecName: "kube-api-access-pmxqw") pod "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" (UID: "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647"). InnerVolumeSpecName "kube-api-access-pmxqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.908324 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" (UID: "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.908882 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory" (OuterVolumeSpecName: "inventory") pod "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" (UID: "3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.975663 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmxqw\" (UniqueName: \"kubernetes.io/projected/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-kube-api-access-pmxqw\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.975706 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:27 crc kubenswrapper[4873]: I0219 10:07:27.975720 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.339630 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" event={"ID":"3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647","Type":"ContainerDied","Data":"a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883"} Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.339871 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66a2ac00db4b0ce1cf689d6373930dcff701c9b030b9ffb68e1f54a4a051883" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.339696 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mt2n6" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.417464 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r"] Feb 19 10:07:28 crc kubenswrapper[4873]: E0219 10:07:28.417859 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.417874 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.418049 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.418826 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.421261 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.421423 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.421900 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.432946 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.437929 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r"] Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.587253 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknhk\" (UniqueName: \"kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.587567 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 
10:07:28.588432 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.588880 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.690903 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknhk\" (UniqueName: \"kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.691293 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.691416 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.691478 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.695366 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.696191 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.700316 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.706497 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknhk\" (UniqueName: \"kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:28 crc kubenswrapper[4873]: I0219 10:07:28.737022 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:07:29 crc kubenswrapper[4873]: I0219 10:07:29.297373 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r"] Feb 19 10:07:29 crc kubenswrapper[4873]: I0219 10:07:29.353591 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" event={"ID":"fb8aa6eb-a92d-47ab-803f-664399242dde","Type":"ContainerStarted","Data":"e61fa5adb3a25729e4ecf38db740518b7a6ba9fcd4130132a83dd97938b102a1"} Feb 19 10:07:30 crc kubenswrapper[4873]: I0219 10:07:30.366280 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" event={"ID":"fb8aa6eb-a92d-47ab-803f-664399242dde","Type":"ContainerStarted","Data":"5a3eb0b6bd6c7101e7f4001df9bf0b1c21607b64acd82737d7c08daff696a875"} Feb 19 10:07:30 crc kubenswrapper[4873]: I0219 10:07:30.390853 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" podStartSLOduration=1.98012219 podStartE2EDuration="2.390834472s" podCreationTimestamp="2026-02-19 10:07:28 +0000 UTC" firstStartedPulling="2026-02-19 10:07:29.30486108 +0000 UTC m=+1358.594292718" 
lastFinishedPulling="2026-02-19 10:07:29.715573372 +0000 UTC m=+1359.005005000" observedRunningTime="2026-02-19 10:07:30.381630634 +0000 UTC m=+1359.671062282" watchObservedRunningTime="2026-02-19 10:07:30.390834472 +0000 UTC m=+1359.680266110" Feb 19 10:07:55 crc kubenswrapper[4873]: I0219 10:07:55.899064 4873 scope.go:117] "RemoveContainer" containerID="2f0ffc7ea2219fb39042b2ae636be2bc871ede3a5af5f5056178cf8abfebcb4d" Feb 19 10:07:55 crc kubenswrapper[4873]: I0219 10:07:55.930573 4873 scope.go:117] "RemoveContainer" containerID="300c17fe87cdc74fea5cc1a915ff92db53e3c3a4eee6ced7352b06833035dffb" Feb 19 10:08:56 crc kubenswrapper[4873]: I0219 10:08:56.039560 4873 scope.go:117] "RemoveContainer" containerID="dfe8a7cf5aeabc3bd0899d011af3258a91fc6d795682d82ec9f1f9c569448452" Feb 19 10:08:56 crc kubenswrapper[4873]: I0219 10:08:56.105766 4873 scope.go:117] "RemoveContainer" containerID="1a1b6f4ba694daddb17f029a0bbce06c79e8294e69f096dade9d91ac98c03f81" Feb 19 10:08:56 crc kubenswrapper[4873]: I0219 10:08:56.145140 4873 scope.go:117] "RemoveContainer" containerID="f728a5cace0f3c84844ee9bd7c5a0c48b5b5cad808dd5c682427cb942eb77db6" Feb 19 10:09:18 crc kubenswrapper[4873]: I0219 10:09:18.240982 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:09:18 crc kubenswrapper[4873]: I0219 10:09:18.241623 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:09:48 crc kubenswrapper[4873]: I0219 10:09:48.240387 4873 patch_prober.go:28] interesting 
pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:09:48 crc kubenswrapper[4873]: I0219 10:09:48.240871 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:10:18 crc kubenswrapper[4873]: I0219 10:10:18.241082 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:10:18 crc kubenswrapper[4873]: I0219 10:10:18.242010 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:10:18 crc kubenswrapper[4873]: I0219 10:10:18.242083 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:10:18 crc kubenswrapper[4873]: I0219 10:10:18.243501 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 19 10:10:18 crc kubenswrapper[4873]: I0219 10:10:18.243573 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" gracePeriod=600 Feb 19 10:10:18 crc kubenswrapper[4873]: E0219 10:10:18.367226 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:10:19 crc kubenswrapper[4873]: I0219 10:10:19.074948 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" exitCode=0 Feb 19 10:10:19 crc kubenswrapper[4873]: I0219 10:10:19.075004 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"} Feb 19 10:10:19 crc kubenswrapper[4873]: I0219 10:10:19.075047 4873 scope.go:117] "RemoveContainer" containerID="fe114037dbb1e5c10911ab253f48b67258ca8f08b33d891b20892a3cde8544ad" Feb 19 10:10:19 crc kubenswrapper[4873]: I0219 10:10:19.075613 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:10:19 crc kubenswrapper[4873]: E0219 10:10:19.076010 4873 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:10:33 crc kubenswrapper[4873]: I0219 10:10:33.232775 4873 generic.go:334] "Generic (PLEG): container finished" podID="fb8aa6eb-a92d-47ab-803f-664399242dde" containerID="5a3eb0b6bd6c7101e7f4001df9bf0b1c21607b64acd82737d7c08daff696a875" exitCode=0 Feb 19 10:10:33 crc kubenswrapper[4873]: I0219 10:10:33.233339 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" event={"ID":"fb8aa6eb-a92d-47ab-803f-664399242dde","Type":"ContainerDied","Data":"5a3eb0b6bd6c7101e7f4001df9bf0b1c21607b64acd82737d7c08daff696a875"} Feb 19 10:10:33 crc kubenswrapper[4873]: I0219 10:10:33.485051 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:10:33 crc kubenswrapper[4873]: E0219 10:10:33.485415 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.738410 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.758379 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory\") pod \"fb8aa6eb-a92d-47ab-803f-664399242dde\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.758594 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nknhk\" (UniqueName: \"kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk\") pod \"fb8aa6eb-a92d-47ab-803f-664399242dde\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.758653 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam\") pod \"fb8aa6eb-a92d-47ab-803f-664399242dde\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.758783 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle\") pod \"fb8aa6eb-a92d-47ab-803f-664399242dde\" (UID: \"fb8aa6eb-a92d-47ab-803f-664399242dde\") " Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.766880 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fb8aa6eb-a92d-47ab-803f-664399242dde" (UID: "fb8aa6eb-a92d-47ab-803f-664399242dde"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.767064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk" (OuterVolumeSpecName: "kube-api-access-nknhk") pod "fb8aa6eb-a92d-47ab-803f-664399242dde" (UID: "fb8aa6eb-a92d-47ab-803f-664399242dde"). InnerVolumeSpecName "kube-api-access-nknhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.808558 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory" (OuterVolumeSpecName: "inventory") pod "fb8aa6eb-a92d-47ab-803f-664399242dde" (UID: "fb8aa6eb-a92d-47ab-803f-664399242dde"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.810595 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fb8aa6eb-a92d-47ab-803f-664399242dde" (UID: "fb8aa6eb-a92d-47ab-803f-664399242dde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.862328 4873 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.862380 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.862393 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nknhk\" (UniqueName: \"kubernetes.io/projected/fb8aa6eb-a92d-47ab-803f-664399242dde-kube-api-access-nknhk\") on node \"crc\" DevicePath \"\"" Feb 19 10:10:34 crc kubenswrapper[4873]: I0219 10:10:34.862404 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb8aa6eb-a92d-47ab-803f-664399242dde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.258921 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" event={"ID":"fb8aa6eb-a92d-47ab-803f-664399242dde","Type":"ContainerDied","Data":"e61fa5adb3a25729e4ecf38db740518b7a6ba9fcd4130132a83dd97938b102a1"} Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.258988 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61fa5adb3a25729e4ecf38db740518b7a6ba9fcd4130132a83dd97938b102a1" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.259036 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.344051 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj"] Feb 19 10:10:35 crc kubenswrapper[4873]: E0219 10:10:35.344667 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8aa6eb-a92d-47ab-803f-664399242dde" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.344695 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8aa6eb-a92d-47ab-803f-664399242dde" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.345005 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8aa6eb-a92d-47ab-803f-664399242dde" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.345960 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.348960 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.349955 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.350286 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.353609 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.361447 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj"] Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.377206 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.377317 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 
10:10:35.377407 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8jp\" (UniqueName: \"kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.480520 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.480726 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.480851 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj8jp\" (UniqueName: \"kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.486349 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.487492 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.515588 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj8jp\" (UniqueName: \"kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:35 crc kubenswrapper[4873]: I0219 10:10:35.663887 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:10:36 crc kubenswrapper[4873]: I0219 10:10:36.272845 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:10:36 crc kubenswrapper[4873]: I0219 10:10:36.278599 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj"] Feb 19 10:10:37 crc kubenswrapper[4873]: I0219 10:10:37.283140 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" event={"ID":"ab7d5a49-ac61-4963-8766-1716098f3d4c","Type":"ContainerStarted","Data":"5a71dd5b4760261dd2be0e41411c70e6162e350c368db3bf6d1ef5a664d01e28"} Feb 19 10:10:37 crc kubenswrapper[4873]: I0219 10:10:37.283507 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" event={"ID":"ab7d5a49-ac61-4963-8766-1716098f3d4c","Type":"ContainerStarted","Data":"104a5f4bca47feaf4bdaf13262521e9dce1f7e82a92383ad952b55930e0a5622"} Feb 19 10:10:37 crc kubenswrapper[4873]: I0219 10:10:37.310587 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" podStartSLOduration=1.8451071959999998 podStartE2EDuration="2.310550032s" podCreationTimestamp="2026-02-19 10:10:35 +0000 UTC" firstStartedPulling="2026-02-19 10:10:36.272476873 +0000 UTC m=+1545.561908511" lastFinishedPulling="2026-02-19 10:10:36.737919709 +0000 UTC m=+1546.027351347" observedRunningTime="2026-02-19 10:10:37.303052574 +0000 UTC m=+1546.592484212" watchObservedRunningTime="2026-02-19 10:10:37.310550032 +0000 UTC m=+1546.599981670" Feb 19 10:10:45 crc kubenswrapper[4873]: I0219 10:10:45.484557 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:10:45 crc 
kubenswrapper[4873]: E0219 10:10:45.485386 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:10:58 crc kubenswrapper[4873]: I0219 10:10:58.484704 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:10:58 crc kubenswrapper[4873]: E0219 10:10:58.485450 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.087825 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.090209 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.107863 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.135387 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.135460 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.135520 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtgw\" (UniqueName: \"kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.236514 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtgw\" (UniqueName: \"kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.236788 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.236821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.237616 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.237673 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.262507 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtgw\" (UniqueName: \"kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw\") pod \"community-operators-dn7mz\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:00 crc kubenswrapper[4873]: I0219 10:11:00.434878 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:01 crc kubenswrapper[4873]: I0219 10:11:01.050778 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:01 crc kubenswrapper[4873]: I0219 10:11:01.509535 4873 generic.go:334] "Generic (PLEG): container finished" podID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerID="d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a" exitCode=0 Feb 19 10:11:01 crc kubenswrapper[4873]: I0219 10:11:01.509635 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerDied","Data":"d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a"} Feb 19 10:11:01 crc kubenswrapper[4873]: I0219 10:11:01.509880 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerStarted","Data":"6016678eb243c8a893a20317cd6ef0f121cfa445fec2561420da8503f1d94afe"} Feb 19 10:11:02 crc kubenswrapper[4873]: I0219 10:11:02.522122 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerStarted","Data":"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119"} Feb 19 10:11:03 crc kubenswrapper[4873]: I0219 10:11:03.532346 4873 generic.go:334] "Generic (PLEG): container finished" podID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerID="73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119" exitCode=0 Feb 19 10:11:03 crc kubenswrapper[4873]: I0219 10:11:03.532408 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" 
event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerDied","Data":"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119"} Feb 19 10:11:04 crc kubenswrapper[4873]: I0219 10:11:04.546496 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerStarted","Data":"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480"} Feb 19 10:11:04 crc kubenswrapper[4873]: I0219 10:11:04.570157 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dn7mz" podStartSLOduration=2.089625965 podStartE2EDuration="4.570137303s" podCreationTimestamp="2026-02-19 10:11:00 +0000 UTC" firstStartedPulling="2026-02-19 10:11:01.511137255 +0000 UTC m=+1570.800568893" lastFinishedPulling="2026-02-19 10:11:03.991648593 +0000 UTC m=+1573.281080231" observedRunningTime="2026-02-19 10:11:04.56921074 +0000 UTC m=+1573.858642378" watchObservedRunningTime="2026-02-19 10:11:04.570137303 +0000 UTC m=+1573.859568931" Feb 19 10:11:10 crc kubenswrapper[4873]: I0219 10:11:10.435723 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:10 crc kubenswrapper[4873]: I0219 10:11:10.437431 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:10 crc kubenswrapper[4873]: I0219 10:11:10.481376 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:10 crc kubenswrapper[4873]: I0219 10:11:10.680531 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:10 crc kubenswrapper[4873]: I0219 10:11:10.733014 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:12 crc kubenswrapper[4873]: I0219 10:11:12.632893 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dn7mz" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="registry-server" containerID="cri-o://e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480" gracePeriod=2 Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.301252 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.422881 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtgw\" (UniqueName: \"kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw\") pod \"9454e008-863b-47f7-8b39-98d7b7a128cb\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.423182 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content\") pod \"9454e008-863b-47f7-8b39-98d7b7a128cb\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.423314 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities\") pod \"9454e008-863b-47f7-8b39-98d7b7a128cb\" (UID: \"9454e008-863b-47f7-8b39-98d7b7a128cb\") " Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.424613 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities" (OuterVolumeSpecName: "utilities") pod "9454e008-863b-47f7-8b39-98d7b7a128cb" (UID: 
"9454e008-863b-47f7-8b39-98d7b7a128cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.429632 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw" (OuterVolumeSpecName: "kube-api-access-nqtgw") pod "9454e008-863b-47f7-8b39-98d7b7a128cb" (UID: "9454e008-863b-47f7-8b39-98d7b7a128cb"). InnerVolumeSpecName "kube-api-access-nqtgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.479052 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9454e008-863b-47f7-8b39-98d7b7a128cb" (UID: "9454e008-863b-47f7-8b39-98d7b7a128cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.484284 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:11:13 crc kubenswrapper[4873]: E0219 10:11:13.484637 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.532047 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtgw\" (UniqueName: \"kubernetes.io/projected/9454e008-863b-47f7-8b39-98d7b7a128cb-kube-api-access-nqtgw\") on node \"crc\" DevicePath \"\"" Feb 19 
10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.532286 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.532325 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9454e008-863b-47f7-8b39-98d7b7a128cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.644502 4873 generic.go:334] "Generic (PLEG): container finished" podID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerID="e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480" exitCode=0 Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.644593 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn7mz" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.644611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerDied","Data":"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480"} Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.645570 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn7mz" event={"ID":"9454e008-863b-47f7-8b39-98d7b7a128cb","Type":"ContainerDied","Data":"6016678eb243c8a893a20317cd6ef0f121cfa445fec2561420da8503f1d94afe"} Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.645615 4873 scope.go:117] "RemoveContainer" containerID="e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.668727 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:13 crc 
kubenswrapper[4873]: I0219 10:11:13.671977 4873 scope.go:117] "RemoveContainer" containerID="73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.680196 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dn7mz"] Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.701514 4873 scope.go:117] "RemoveContainer" containerID="d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.735984 4873 scope.go:117] "RemoveContainer" containerID="e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480" Feb 19 10:11:13 crc kubenswrapper[4873]: E0219 10:11:13.736445 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480\": container with ID starting with e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480 not found: ID does not exist" containerID="e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.736476 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480"} err="failed to get container status \"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480\": rpc error: code = NotFound desc = could not find container \"e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480\": container with ID starting with e52180c8384e35a2f5bf2a0b8136587b5856388dc38cf5f3c931307a85591480 not found: ID does not exist" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.736498 4873 scope.go:117] "RemoveContainer" containerID="73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119" Feb 19 10:11:13 crc kubenswrapper[4873]: E0219 10:11:13.736718 4873 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119\": container with ID starting with 73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119 not found: ID does not exist" containerID="73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.736752 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119"} err="failed to get container status \"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119\": rpc error: code = NotFound desc = could not find container \"73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119\": container with ID starting with 73015bd366aa86c3d47a4a486cb2b9e5d213b08253942a273250f467cf812119 not found: ID does not exist" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.736771 4873 scope.go:117] "RemoveContainer" containerID="d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a" Feb 19 10:11:13 crc kubenswrapper[4873]: E0219 10:11:13.737057 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a\": container with ID starting with d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a not found: ID does not exist" containerID="d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a" Feb 19 10:11:13 crc kubenswrapper[4873]: I0219 10:11:13.737075 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a"} err="failed to get container status \"d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a\": rpc error: code = NotFound 
desc = could not find container \"d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a\": container with ID starting with d9588b3b223ddbc522f8374f58e421f3129c9acc0d3e1705f27d808d6e5c4c4a not found: ID does not exist" Feb 19 10:11:15 crc kubenswrapper[4873]: I0219 10:11:15.498119 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" path="/var/lib/kubelet/pods/9454e008-863b-47f7-8b39-98d7b7a128cb/volumes" Feb 19 10:11:27 crc kubenswrapper[4873]: I0219 10:11:27.484678 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:11:27 crc kubenswrapper[4873]: E0219 10:11:27.485552 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.049917 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p55tt"] Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.061363 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p55tt"] Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.073421 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c73a-account-create-update-zxxrn"] Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.082634 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c73a-account-create-update-zxxrn"] Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.145166 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 
10:11:28 crc kubenswrapper[4873]: E0219 10:11:28.146145 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="extract-utilities" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.146162 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="extract-utilities" Feb 19 10:11:28 crc kubenswrapper[4873]: E0219 10:11:28.146200 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="registry-server" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.146209 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="registry-server" Feb 19 10:11:28 crc kubenswrapper[4873]: E0219 10:11:28.146245 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="extract-content" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.146253 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="extract-content" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.146499 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9454e008-863b-47f7-8b39-98d7b7a128cb" containerName="registry-server" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.148373 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.173053 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.232586 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4zjr\" (UniqueName: \"kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.232908 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.233487 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.335897 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.335999 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d4zjr\" (UniqueName: \"kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.336498 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.336563 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.336945 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.365234 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4zjr\" (UniqueName: \"kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr\") pod \"redhat-marketplace-7nxdn\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.485386 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:28 crc kubenswrapper[4873]: I0219 10:11:28.969172 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 10:11:29 crc kubenswrapper[4873]: I0219 10:11:29.497603 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81f72af-8420-4334-811e-f0e0cc1c7731" path="/var/lib/kubelet/pods/d81f72af-8420-4334-811e-f0e0cc1c7731/volumes" Feb 19 10:11:29 crc kubenswrapper[4873]: I0219 10:11:29.498417 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88" path="/var/lib/kubelet/pods/fd0e3e74-f1aa-4b5f-a2ae-b89f90644f88/volumes" Feb 19 10:11:29 crc kubenswrapper[4873]: I0219 10:11:29.793020 4873 generic.go:334] "Generic (PLEG): container finished" podID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerID="d1ee2b33d1585962e5c2d8f8deb1f53ccd7bfb877b0017578ca6bff8f7dfd26e" exitCode=0 Feb 19 10:11:29 crc kubenswrapper[4873]: I0219 10:11:29.793076 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerDied","Data":"d1ee2b33d1585962e5c2d8f8deb1f53ccd7bfb877b0017578ca6bff8f7dfd26e"} Feb 19 10:11:29 crc kubenswrapper[4873]: I0219 10:11:29.793123 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerStarted","Data":"f6db4c0d2d18a7e3a75dd04979a22304b269c40be6cddc6d1fda3629593b15c0"} Feb 19 10:11:31 crc kubenswrapper[4873]: I0219 10:11:31.817410 4873 generic.go:334] "Generic (PLEG): container finished" podID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerID="9d6de769e9d17333501a00980ac56829127a539f53f95bb25cf420c5630db360" exitCode=0 Feb 19 10:11:31 crc kubenswrapper[4873]: I0219 10:11:31.817523 4873 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerDied","Data":"9d6de769e9d17333501a00980ac56829127a539f53f95bb25cf420c5630db360"} Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.035070 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-46kds"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.045614 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-46kds"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.056781 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-r4fbt"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.070561 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e187-account-create-update-4xb7l"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.082142 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-r4fbt"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.094213 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f064-account-create-update-flh2f"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.105614 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e187-account-create-update-4xb7l"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.116361 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f064-account-create-update-flh2f"] Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.521125 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179cf76d-a15d-4bce-be42-18ad2e4abb94" path="/var/lib/kubelet/pods/179cf76d-a15d-4bce-be42-18ad2e4abb94/volumes" Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.806340 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6584bab0-12c6-4bce-99be-d38f3748f896" path="/var/lib/kubelet/pods/6584bab0-12c6-4bce-99be-d38f3748f896/volumes" Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.837227 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1770246-951b-40da-a0a2-4320dde71437" path="/var/lib/kubelet/pods/a1770246-951b-40da-a0a2-4320dde71437/volumes" Feb 19 10:11:33 crc kubenswrapper[4873]: I0219 10:11:33.844918 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af085fbb-9aaa-4d01-8a0f-a061acf3a845" path="/var/lib/kubelet/pods/af085fbb-9aaa-4d01-8a0f-a061acf3a845/volumes" Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.033568 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-087c-account-create-update-qnlsx"] Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.046318 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-dftzh"] Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.056660 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-dftzh"] Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.066633 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-087c-account-create-update-qnlsx"] Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.850058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerStarted","Data":"05b98d967cad1ee13631fa999a1fe6d672e3aca8c59af9321211e972aeea3daf"} Feb 19 10:11:34 crc kubenswrapper[4873]: I0219 10:11:34.868578 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7nxdn" podStartSLOduration=2.74214507 podStartE2EDuration="6.868559756s" podCreationTimestamp="2026-02-19 10:11:28 +0000 UTC" firstStartedPulling="2026-02-19 10:11:29.795895411 +0000 UTC 
m=+1599.085327049" lastFinishedPulling="2026-02-19 10:11:33.922310097 +0000 UTC m=+1603.211741735" observedRunningTime="2026-02-19 10:11:34.868059414 +0000 UTC m=+1604.157491062" watchObservedRunningTime="2026-02-19 10:11:34.868559756 +0000 UTC m=+1604.157991394" Feb 19 10:11:35 crc kubenswrapper[4873]: I0219 10:11:35.498420 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0cc2ef-89a2-4220-8b44-7fc71537ab50" path="/var/lib/kubelet/pods/7b0cc2ef-89a2-4220-8b44-7fc71537ab50/volumes" Feb 19 10:11:35 crc kubenswrapper[4873]: I0219 10:11:35.499948 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4" path="/var/lib/kubelet/pods/bd8b7b2e-f4a8-4af9-99aa-a1e8c3d78bd4/volumes" Feb 19 10:11:38 crc kubenswrapper[4873]: I0219 10:11:38.484328 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:11:38 crc kubenswrapper[4873]: E0219 10:11:38.484848 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:11:38 crc kubenswrapper[4873]: I0219 10:11:38.485658 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:38 crc kubenswrapper[4873]: I0219 10:11:38.485801 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:38 crc kubenswrapper[4873]: I0219 10:11:38.544248 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:39 crc kubenswrapper[4873]: I0219 10:11:39.968341 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:40 crc kubenswrapper[4873]: I0219 10:11:40.060726 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kvmj2"] Feb 19 10:11:40 crc kubenswrapper[4873]: I0219 10:11:40.073515 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kvmj2"] Feb 19 10:11:40 crc kubenswrapper[4873]: I0219 10:11:40.086320 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 10:11:41 crc kubenswrapper[4873]: I0219 10:11:41.516117 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a6680f-7e8e-4326-9401-fde957599477" path="/var/lib/kubelet/pods/29a6680f-7e8e-4326-9401-fde957599477/volumes" Feb 19 10:11:41 crc kubenswrapper[4873]: I0219 10:11:41.917413 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7nxdn" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="registry-server" containerID="cri-o://05b98d967cad1ee13631fa999a1fe6d672e3aca8c59af9321211e972aeea3daf" gracePeriod=2 Feb 19 10:11:42 crc kubenswrapper[4873]: I0219 10:11:42.930068 4873 generic.go:334] "Generic (PLEG): container finished" podID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerID="05b98d967cad1ee13631fa999a1fe6d672e3aca8c59af9321211e972aeea3daf" exitCode=0 Feb 19 10:11:42 crc kubenswrapper[4873]: I0219 10:11:42.930153 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerDied","Data":"05b98d967cad1ee13631fa999a1fe6d672e3aca8c59af9321211e972aeea3daf"} Feb 19 10:11:42 crc kubenswrapper[4873]: I0219 
10:11:42.930448 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7nxdn" event={"ID":"64a48b6f-c0a6-4566-82a1-649e91bcd486","Type":"ContainerDied","Data":"f6db4c0d2d18a7e3a75dd04979a22304b269c40be6cddc6d1fda3629593b15c0"} Feb 19 10:11:42 crc kubenswrapper[4873]: I0219 10:11:42.930465 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6db4c0d2d18a7e3a75dd04979a22304b269c40be6cddc6d1fda3629593b15c0" Feb 19 10:11:42 crc kubenswrapper[4873]: I0219 10:11:42.972817 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.082492 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities\") pod \"64a48b6f-c0a6-4566-82a1-649e91bcd486\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.082717 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4zjr\" (UniqueName: \"kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr\") pod \"64a48b6f-c0a6-4566-82a1-649e91bcd486\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.082756 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content\") pod \"64a48b6f-c0a6-4566-82a1-649e91bcd486\" (UID: \"64a48b6f-c0a6-4566-82a1-649e91bcd486\") " Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.083315 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities" (OuterVolumeSpecName: 
"utilities") pod "64a48b6f-c0a6-4566-82a1-649e91bcd486" (UID: "64a48b6f-c0a6-4566-82a1-649e91bcd486"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.102240 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr" (OuterVolumeSpecName: "kube-api-access-d4zjr") pod "64a48b6f-c0a6-4566-82a1-649e91bcd486" (UID: "64a48b6f-c0a6-4566-82a1-649e91bcd486"). InnerVolumeSpecName "kube-api-access-d4zjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.114047 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64a48b6f-c0a6-4566-82a1-649e91bcd486" (UID: "64a48b6f-c0a6-4566-82a1-649e91bcd486"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.185485 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4zjr\" (UniqueName: \"kubernetes.io/projected/64a48b6f-c0a6-4566-82a1-649e91bcd486-kube-api-access-d4zjr\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.185541 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.185554 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a48b6f-c0a6-4566-82a1-649e91bcd486-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.938740 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7nxdn" Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.974461 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 10:11:43 crc kubenswrapper[4873]: I0219 10:11:43.984576 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7nxdn"] Feb 19 10:11:46 crc kubenswrapper[4873]: I0219 10:11:46.384079 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" path="/var/lib/kubelet/pods/64a48b6f-c0a6-4566-82a1-649e91bcd486/volumes" Feb 19 10:11:53 crc kubenswrapper[4873]: I0219 10:11:53.485428 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:11:53 crc kubenswrapper[4873]: E0219 10:11:53.486306 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.461319 4873 scope.go:117] "RemoveContainer" containerID="97c6c0035f5f6c9762dd68933f3909de6f99dfa1fe212cf2c55b0644dfffdb93" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.499936 4873 scope.go:117] "RemoveContainer" containerID="d63b34383441b0e539673e31cf4ea017f3d4fcdbd72ad26d47bf96c33fcf565d" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.561064 4873 scope.go:117] "RemoveContainer" containerID="7d6d1faa851ee46aca753c0c6509416782269dc982725ceddb2cd7f19fc16f13" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.640389 4873 scope.go:117] "RemoveContainer" containerID="6a370733fa679d2517624889ae788a6c37c512bf2894dbe6a54f6e24bdad6056" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.682259 4873 scope.go:117] "RemoveContainer" containerID="5b7d076de566fc4d5772c9116560231f2acaebd0aad62281f0cb88965b142cc2" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.724426 4873 scope.go:117] "RemoveContainer" containerID="f0d3f3ad8d69a092fbacd08190bbe079ce8644eec25f2003bbee9cc3d511dd9c" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.779911 4873 scope.go:117] "RemoveContainer" containerID="3a7ee324cc97736a2be2ff10cda880e991b9ebce5c06108335c9156379f7a8ea" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.816006 4873 scope.go:117] "RemoveContainer" containerID="7cdae30c7bd3b7746068a98e9abd29f90395d96b144f2365d0ebb1da465756e0" Feb 19 10:11:56 crc kubenswrapper[4873]: I0219 10:11:56.842582 4873 scope.go:117] "RemoveContainer" containerID="2be1eaacedf333b387e3ffd6dce5223b73f9487c48808cd68df4b60a3f55fd39" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 
10:12:01.637401 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:01 crc kubenswrapper[4873]: E0219 10:12:01.638809 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="extract-content" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.638827 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="extract-content" Feb 19 10:12:01 crc kubenswrapper[4873]: E0219 10:12:01.638842 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="extract-utilities" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.638849 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="extract-utilities" Feb 19 10:12:01 crc kubenswrapper[4873]: E0219 10:12:01.638865 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="registry-server" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.638872 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="registry-server" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.639127 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a48b6f-c0a6-4566-82a1-649e91bcd486" containerName="registry-server" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.641138 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.671294 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.800388 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.800562 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.800681 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g88c8\" (UniqueName: \"kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.902620 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g88c8\" (UniqueName: \"kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.902694 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.902810 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.903375 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.903410 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.926675 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g88c8\" (UniqueName: \"kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8\") pod \"redhat-operators-z6dkm\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:01 crc kubenswrapper[4873]: I0219 10:12:01.977492 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.090182 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w65h5"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.106183 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-f5jnw"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.119362 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-73d0-account-create-update-vwj8q"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.154786 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w65h5"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.166769 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-45f0-account-create-update-b4rvj"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.207387 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-63c5-account-create-update-rqftk"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.262431 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nfk5h"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.283738 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-63c5-account-create-update-rqftk"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.307237 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nfk5h"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.321266 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-45f0-account-create-update-b4rvj"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.338924 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-f5jnw"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.351514 4873 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-73d0-account-create-update-vwj8q"] Feb 19 10:12:02 crc kubenswrapper[4873]: I0219 10:12:02.574532 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.200168 4873 generic.go:334] "Generic (PLEG): container finished" podID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerID="5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd" exitCode=0 Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.200563 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerDied","Data":"5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd"} Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.200602 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerStarted","Data":"cc423b5accdb7784192d7c3d495f5ef39826afe8978f935d56b7fa1b762d6316"} Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.496474 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679f69ef-9960-4e33-a6aa-09baefabc417" path="/var/lib/kubelet/pods/679f69ef-9960-4e33-a6aa-09baefabc417/volumes" Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.498442 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c35b26-7dc1-4cea-bbe7-53a9e47df7ba" path="/var/lib/kubelet/pods/94c35b26-7dc1-4cea-bbe7-53a9e47df7ba/volumes" Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.500399 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0daf0d-c150-49de-98af-3f65dd78112f" path="/var/lib/kubelet/pods/bf0daf0d-c150-49de-98af-3f65dd78112f/volumes" Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.500965 4873 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e1f97f25-d006-40d7-a090-ab45ab11b282" path="/var/lib/kubelet/pods/e1f97f25-d006-40d7-a090-ab45ab11b282/volumes" Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.504605 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec120760-bb10-44ff-bbb0-ed1665b4e17b" path="/var/lib/kubelet/pods/ec120760-bb10-44ff-bbb0-ed1665b4e17b/volumes" Feb 19 10:12:03 crc kubenswrapper[4873]: I0219 10:12:03.506456 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7769ae-caf0-4f62-be96-90d6fa334259" path="/var/lib/kubelet/pods/fd7769ae-caf0-4f62-be96-90d6fa334259/volumes" Feb 19 10:12:04 crc kubenswrapper[4873]: I0219 10:12:04.484457 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:12:04 crc kubenswrapper[4873]: E0219 10:12:04.485229 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:12:05 crc kubenswrapper[4873]: I0219 10:12:05.226308 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerStarted","Data":"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82"} Feb 19 10:12:08 crc kubenswrapper[4873]: I0219 10:12:08.260877 4873 generic.go:334] "Generic (PLEG): container finished" podID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerID="e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82" exitCode=0 Feb 19 10:12:08 crc kubenswrapper[4873]: I0219 10:12:08.260947 4873 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerDied","Data":"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82"} Feb 19 10:12:09 crc kubenswrapper[4873]: I0219 10:12:09.036979 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-k6j2h"] Feb 19 10:12:09 crc kubenswrapper[4873]: I0219 10:12:09.048476 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-k6j2h"] Feb 19 10:12:09 crc kubenswrapper[4873]: I0219 10:12:09.496397 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a075072a-1153-4963-91c7-e9e2aa08f988" path="/var/lib/kubelet/pods/a075072a-1153-4963-91c7-e9e2aa08f988/volumes" Feb 19 10:12:10 crc kubenswrapper[4873]: I0219 10:12:10.028621 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-86n9s"] Feb 19 10:12:10 crc kubenswrapper[4873]: I0219 10:12:10.037157 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-86n9s"] Feb 19 10:12:10 crc kubenswrapper[4873]: I0219 10:12:10.283239 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerStarted","Data":"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4"} Feb 19 10:12:10 crc kubenswrapper[4873]: I0219 10:12:10.305812 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z6dkm" podStartSLOduration=3.039666542 podStartE2EDuration="9.305789728s" podCreationTimestamp="2026-02-19 10:12:01 +0000 UTC" firstStartedPulling="2026-02-19 10:12:03.202526993 +0000 UTC m=+1632.491958631" lastFinishedPulling="2026-02-19 10:12:09.468650179 +0000 UTC m=+1638.758081817" observedRunningTime="2026-02-19 10:12:10.302617528 +0000 UTC m=+1639.592049186" 
watchObservedRunningTime="2026-02-19 10:12:10.305789728 +0000 UTC m=+1639.595221366" Feb 19 10:12:11 crc kubenswrapper[4873]: I0219 10:12:11.496308 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735c003d-082d-431f-9906-20c8946f1bf4" path="/var/lib/kubelet/pods/735c003d-082d-431f-9906-20c8946f1bf4/volumes" Feb 19 10:12:11 crc kubenswrapper[4873]: I0219 10:12:11.978394 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:11 crc kubenswrapper[4873]: I0219 10:12:11.978456 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:13 crc kubenswrapper[4873]: I0219 10:12:13.025795 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z6dkm" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="registry-server" probeResult="failure" output=< Feb 19 10:12:13 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:12:13 crc kubenswrapper[4873]: > Feb 19 10:12:14 crc kubenswrapper[4873]: I0219 10:12:14.321457 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab7d5a49-ac61-4963-8766-1716098f3d4c" containerID="5a71dd5b4760261dd2be0e41411c70e6162e350c368db3bf6d1ef5a664d01e28" exitCode=0 Feb 19 10:12:14 crc kubenswrapper[4873]: I0219 10:12:14.321543 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" event={"ID":"ab7d5a49-ac61-4963-8766-1716098f3d4c","Type":"ContainerDied","Data":"5a71dd5b4760261dd2be0e41411c70e6162e350c368db3bf6d1ef5a664d01e28"} Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.752733 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.808794 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam\") pod \"ab7d5a49-ac61-4963-8766-1716098f3d4c\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.808967 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") pod \"ab7d5a49-ac61-4963-8766-1716098f3d4c\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.809004 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj8jp\" (UniqueName: \"kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp\") pod \"ab7d5a49-ac61-4963-8766-1716098f3d4c\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.815374 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp" (OuterVolumeSpecName: "kube-api-access-cj8jp") pod "ab7d5a49-ac61-4963-8766-1716098f3d4c" (UID: "ab7d5a49-ac61-4963-8766-1716098f3d4c"). InnerVolumeSpecName "kube-api-access-cj8jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:15 crc kubenswrapper[4873]: E0219 10:12:15.848517 4873 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory podName:ab7d5a49-ac61-4963-8766-1716098f3d4c nodeName:}" failed. 
No retries permitted until 2026-02-19 10:12:16.348489985 +0000 UTC m=+1645.637921623 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory") pod "ab7d5a49-ac61-4963-8766-1716098f3d4c" (UID: "ab7d5a49-ac61-4963-8766-1716098f3d4c") : error deleting /var/lib/kubelet/pods/ab7d5a49-ac61-4963-8766-1716098f3d4c/volume-subpaths: remove /var/lib/kubelet/pods/ab7d5a49-ac61-4963-8766-1716098f3d4c/volume-subpaths: no such file or directory Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.852676 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab7d5a49-ac61-4963-8766-1716098f3d4c" (UID: "ab7d5a49-ac61-4963-8766-1716098f3d4c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.911307 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:15 crc kubenswrapper[4873]: I0219 10:12:15.911344 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj8jp\" (UniqueName: \"kubernetes.io/projected/ab7d5a49-ac61-4963-8766-1716098f3d4c-kube-api-access-cj8jp\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.340154 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" event={"ID":"ab7d5a49-ac61-4963-8766-1716098f3d4c","Type":"ContainerDied","Data":"104a5f4bca47feaf4bdaf13262521e9dce1f7e82a92383ad952b55930e0a5622"} Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.340520 
4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="104a5f4bca47feaf4bdaf13262521e9dce1f7e82a92383ad952b55930e0a5622" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.340213 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.419493 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") pod \"ab7d5a49-ac61-4963-8766-1716098f3d4c\" (UID: \"ab7d5a49-ac61-4963-8766-1716098f3d4c\") " Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.423942 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory" (OuterVolumeSpecName: "inventory") pod "ab7d5a49-ac61-4963-8766-1716098f3d4c" (UID: "ab7d5a49-ac61-4963-8766-1716098f3d4c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.441687 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b"] Feb 19 10:12:16 crc kubenswrapper[4873]: E0219 10:12:16.442122 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7d5a49-ac61-4963-8766-1716098f3d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.442138 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7d5a49-ac61-4963-8766-1716098f3d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.442364 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7d5a49-ac61-4963-8766-1716098f3d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.443165 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.464988 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b"] Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.522628 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.522757 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.522792 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ntc\" (UniqueName: \"kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.524323 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab7d5a49-ac61-4963-8766-1716098f3d4c-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 
10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.625798 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.625893 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.625924 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ntc\" (UniqueName: \"kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.630454 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.630494 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.642334 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ntc\" (UniqueName: \"kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-snp5b\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:16 crc kubenswrapper[4873]: I0219 10:12:16.793303 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:12:17 crc kubenswrapper[4873]: I0219 10:12:17.407689 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b"] Feb 19 10:12:18 crc kubenswrapper[4873]: I0219 10:12:18.361449 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" event={"ID":"f0739ccd-765a-42c4-89b4-de6adf188e24","Type":"ContainerStarted","Data":"0f2cd21fa77984269424706bd9843a6592fdd7f0b6272e7e9dfc6450008c946c"} Feb 19 10:12:19 crc kubenswrapper[4873]: I0219 10:12:19.370987 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" event={"ID":"f0739ccd-765a-42c4-89b4-de6adf188e24","Type":"ContainerStarted","Data":"3920eadc9823cf170b0e3ddab307f81fb31ae3300913abfa6bdf114e6376e26e"} Feb 19 10:12:19 crc kubenswrapper[4873]: I0219 10:12:19.391027 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" podStartSLOduration=2.720497514 podStartE2EDuration="3.391004538s" podCreationTimestamp="2026-02-19 10:12:16 +0000 UTC" firstStartedPulling="2026-02-19 10:12:17.419320673 +0000 UTC m=+1646.708752311" lastFinishedPulling="2026-02-19 10:12:18.089827697 +0000 UTC m=+1647.379259335" observedRunningTime="2026-02-19 10:12:19.385691684 +0000 UTC m=+1648.675123332" watchObservedRunningTime="2026-02-19 10:12:19.391004538 +0000 UTC m=+1648.680436186" Feb 19 10:12:19 crc kubenswrapper[4873]: I0219 10:12:19.483829 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:12:19 crc kubenswrapper[4873]: E0219 10:12:19.484176 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:12:22 crc kubenswrapper[4873]: I0219 10:12:22.029935 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:22 crc kubenswrapper[4873]: I0219 10:12:22.076148 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:22 crc kubenswrapper[4873]: I0219 10:12:22.268854 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:23 crc kubenswrapper[4873]: I0219 10:12:23.405811 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z6dkm" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" 
containerName="registry-server" containerID="cri-o://c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4" gracePeriod=2 Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.161517 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.279845 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g88c8\" (UniqueName: \"kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8\") pod \"d83cc040-5619-46ef-9e78-b7b1f1117e79\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.279941 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content\") pod \"d83cc040-5619-46ef-9e78-b7b1f1117e79\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.280032 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities\") pod \"d83cc040-5619-46ef-9e78-b7b1f1117e79\" (UID: \"d83cc040-5619-46ef-9e78-b7b1f1117e79\") " Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.280956 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities" (OuterVolumeSpecName: "utilities") pod "d83cc040-5619-46ef-9e78-b7b1f1117e79" (UID: "d83cc040-5619-46ef-9e78-b7b1f1117e79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.281130 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.292338 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8" (OuterVolumeSpecName: "kube-api-access-g88c8") pod "d83cc040-5619-46ef-9e78-b7b1f1117e79" (UID: "d83cc040-5619-46ef-9e78-b7b1f1117e79"). InnerVolumeSpecName "kube-api-access-g88c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.382507 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g88c8\" (UniqueName: \"kubernetes.io/projected/d83cc040-5619-46ef-9e78-b7b1f1117e79-kube-api-access-g88c8\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.404286 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83cc040-5619-46ef-9e78-b7b1f1117e79" (UID: "d83cc040-5619-46ef-9e78-b7b1f1117e79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.417613 4873 generic.go:334] "Generic (PLEG): container finished" podID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerID="c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4" exitCode=0 Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.417652 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerDied","Data":"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4"} Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.417679 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z6dkm" event={"ID":"d83cc040-5619-46ef-9e78-b7b1f1117e79","Type":"ContainerDied","Data":"cc423b5accdb7784192d7c3d495f5ef39826afe8978f935d56b7fa1b762d6316"} Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.417700 4873 scope.go:117] "RemoveContainer" containerID="c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.417706 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z6dkm" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.458078 4873 scope.go:117] "RemoveContainer" containerID="e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.465284 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.477797 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z6dkm"] Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.485134 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83cc040-5619-46ef-9e78-b7b1f1117e79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.500076 4873 scope.go:117] "RemoveContainer" containerID="5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.538549 4873 scope.go:117] "RemoveContainer" containerID="c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4" Feb 19 10:12:24 crc kubenswrapper[4873]: E0219 10:12:24.539266 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4\": container with ID starting with c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4 not found: ID does not exist" containerID="c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.539306 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4"} err="failed to get container status 
\"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4\": rpc error: code = NotFound desc = could not find container \"c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4\": container with ID starting with c09685492dfc4d5d29f71ac9d4f3a6d2840be369e5f6cf0d017ea695c5bf24b4 not found: ID does not exist" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.539359 4873 scope.go:117] "RemoveContainer" containerID="e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82" Feb 19 10:12:24 crc kubenswrapper[4873]: E0219 10:12:24.539870 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82\": container with ID starting with e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82 not found: ID does not exist" containerID="e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.539935 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82"} err="failed to get container status \"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82\": rpc error: code = NotFound desc = could not find container \"e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82\": container with ID starting with e89706f794f3858742735b50534c9d6b0f3f672b1c1c6818cde4477bbe445d82 not found: ID does not exist" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.539963 4873 scope.go:117] "RemoveContainer" containerID="5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd" Feb 19 10:12:24 crc kubenswrapper[4873]: E0219 10:12:24.540295 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd\": container with ID starting with 5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd not found: ID does not exist" containerID="5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd" Feb 19 10:12:24 crc kubenswrapper[4873]: I0219 10:12:24.540321 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd"} err="failed to get container status \"5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd\": rpc error: code = NotFound desc = could not find container \"5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd\": container with ID starting with 5b4b0b2b44266488a6c72bad150fbb6843f17c14bdc929c263ca20c5920041bd not found: ID does not exist" Feb 19 10:12:25 crc kubenswrapper[4873]: I0219 10:12:25.496864 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" path="/var/lib/kubelet/pods/d83cc040-5619-46ef-9e78-b7b1f1117e79/volumes" Feb 19 10:12:32 crc kubenswrapper[4873]: I0219 10:12:32.045474 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9472r"] Feb 19 10:12:32 crc kubenswrapper[4873]: I0219 10:12:32.056430 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9472r"] Feb 19 10:12:32 crc kubenswrapper[4873]: I0219 10:12:32.484920 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:12:32 crc kubenswrapper[4873]: E0219 10:12:32.485204 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:12:33 crc kubenswrapper[4873]: I0219 10:12:33.495960 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba" path="/var/lib/kubelet/pods/7ce0a8b9-b7a8-4ee7-8d68-0e6145ada6ba/volumes" Feb 19 10:12:47 crc kubenswrapper[4873]: I0219 10:12:47.485052 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:12:47 crc kubenswrapper[4873]: E0219 10:12:47.485981 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:12:55 crc kubenswrapper[4873]: I0219 10:12:55.055301 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wrcpc"] Feb 19 10:12:55 crc kubenswrapper[4873]: I0219 10:12:55.068079 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wrcpc"] Feb 19 10:12:55 crc kubenswrapper[4873]: I0219 10:12:55.502998 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58099bc8-1a29-467b-b13d-c0713e42e6c2" path="/var/lib/kubelet/pods/58099bc8-1a29-467b-b13d-c0713e42e6c2/volumes" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.178342 4873 scope.go:117] "RemoveContainer" containerID="c237b902a14f464df461cba85ac2f3875c00ea9082d53eccd8620f9ff36dbdf1" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.219376 4873 scope.go:117] "RemoveContainer" containerID="270c7b1e210c60f9930081568a2a368a094d153be46a4131b2c800c6cabb0758" Feb 19 10:12:57 crc 
kubenswrapper[4873]: I0219 10:12:57.287752 4873 scope.go:117] "RemoveContainer" containerID="f7fcab32e5de37523d8bdcbeaad1ae0eeef4c93525ac32d44d6c45730e393e7a" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.328567 4873 scope.go:117] "RemoveContainer" containerID="fc653719445b1a5b46a480337ffb17668e9e8b070f487de6554f5c2e305620c3" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.376924 4873 scope.go:117] "RemoveContainer" containerID="7d9cb5cecd99aa90e0a6558ac0a3e7fa7ae0c94550c983a65f7942335964abac" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.428665 4873 scope.go:117] "RemoveContainer" containerID="e5eec3e87329724888651bc35b53713711df95ef48801142ac4dd2488284d91d" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.488755 4873 scope.go:117] "RemoveContainer" containerID="c2bf0fbba2f57f774b52845ad4c11caabc24a49278d94ff2cd48b137e6aeb541" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.543850 4873 scope.go:117] "RemoveContainer" containerID="d5e694f492487d42cde9591670ae968a23c3523d631e5fc12b8c943bcbf3ca29" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.592665 4873 scope.go:117] "RemoveContainer" containerID="88965eb31897e7c9f4b9aa04da422e3396b97ead67a5f74aaa92bd82cf049dc5" Feb 19 10:12:57 crc kubenswrapper[4873]: I0219 10:12:57.629073 4873 scope.go:117] "RemoveContainer" containerID="a196b181363d1056b517a86bc31a20f9a28399d782296cb45561ba646a621a77" Feb 19 10:12:58 crc kubenswrapper[4873]: I0219 10:12:58.036041 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-98gbw"] Feb 19 10:12:58 crc kubenswrapper[4873]: I0219 10:12:58.045237 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-98gbw"] Feb 19 10:12:59 crc kubenswrapper[4873]: I0219 10:12:59.034309 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4pv5z"] Feb 19 10:12:59 crc kubenswrapper[4873]: I0219 10:12:59.044957 4873 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-sync-4pv5z"] Feb 19 10:12:59 crc kubenswrapper[4873]: I0219 10:12:59.496179 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943d069e-6ad4-4411-b937-c4499f0ced6f" path="/var/lib/kubelet/pods/943d069e-6ad4-4411-b937-c4499f0ced6f/volumes" Feb 19 10:12:59 crc kubenswrapper[4873]: I0219 10:12:59.498695 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5489a2-23e2-4875-a19b-d15b4ad6c8c6" path="/var/lib/kubelet/pods/ec5489a2-23e2-4875-a19b-d15b4ad6c8c6/volumes" Feb 19 10:13:01 crc kubenswrapper[4873]: I0219 10:13:01.493314 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:13:01 crc kubenswrapper[4873]: E0219 10:13:01.493593 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:13:02 crc kubenswrapper[4873]: I0219 10:13:02.031734 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vf762"] Feb 19 10:13:02 crc kubenswrapper[4873]: I0219 10:13:02.040693 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vf762"] Feb 19 10:13:03 crc kubenswrapper[4873]: I0219 10:13:03.500065 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99868e3f-82d7-4f0c-9056-661e95486e6e" path="/var/lib/kubelet/pods/99868e3f-82d7-4f0c-9056-661e95486e6e/volumes" Feb 19 10:13:13 crc kubenswrapper[4873]: I0219 10:13:13.028980 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gqrb5"] Feb 19 10:13:13 crc kubenswrapper[4873]: I0219 
10:13:13.040702 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gqrb5"] Feb 19 10:13:13 crc kubenswrapper[4873]: I0219 10:13:13.484480 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:13:13 crc kubenswrapper[4873]: E0219 10:13:13.484955 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:13:13 crc kubenswrapper[4873]: I0219 10:13:13.499140 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5accb4-1da0-4a21-a289-7dba33ad935f" path="/var/lib/kubelet/pods/ce5accb4-1da0-4a21-a289-7dba33ad935f/volumes" Feb 19 10:13:27 crc kubenswrapper[4873]: I0219 10:13:27.484137 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:13:27 crc kubenswrapper[4873]: E0219 10:13:27.484965 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:13:28 crc kubenswrapper[4873]: I0219 10:13:28.047938 4873 generic.go:334] "Generic (PLEG): container finished" podID="f0739ccd-765a-42c4-89b4-de6adf188e24" containerID="3920eadc9823cf170b0e3ddab307f81fb31ae3300913abfa6bdf114e6376e26e" exitCode=0 Feb 19 10:13:28 crc kubenswrapper[4873]: 
I0219 10:13:28.048005 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" event={"ID":"f0739ccd-765a-42c4-89b4-de6adf188e24","Type":"ContainerDied","Data":"3920eadc9823cf170b0e3ddab307f81fb31ae3300913abfa6bdf114e6376e26e"} Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.486519 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.627419 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam\") pod \"f0739ccd-765a-42c4-89b4-de6adf188e24\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.627767 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory\") pod \"f0739ccd-765a-42c4-89b4-de6adf188e24\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.628143 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6ntc\" (UniqueName: \"kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc\") pod \"f0739ccd-765a-42c4-89b4-de6adf188e24\" (UID: \"f0739ccd-765a-42c4-89b4-de6adf188e24\") " Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.633490 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc" (OuterVolumeSpecName: "kube-api-access-j6ntc") pod "f0739ccd-765a-42c4-89b4-de6adf188e24" (UID: "f0739ccd-765a-42c4-89b4-de6adf188e24"). 
InnerVolumeSpecName "kube-api-access-j6ntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.657250 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0739ccd-765a-42c4-89b4-de6adf188e24" (UID: "f0739ccd-765a-42c4-89b4-de6adf188e24"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.658728 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory" (OuterVolumeSpecName: "inventory") pod "f0739ccd-765a-42c4-89b4-de6adf188e24" (UID: "f0739ccd-765a-42c4-89b4-de6adf188e24"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.732133 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.732165 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0739ccd-765a-42c4-89b4-de6adf188e24-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:29 crc kubenswrapper[4873]: I0219 10:13:29.732174 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6ntc\" (UniqueName: \"kubernetes.io/projected/f0739ccd-765a-42c4-89b4-de6adf188e24-kube-api-access-j6ntc\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.067408 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" event={"ID":"f0739ccd-765a-42c4-89b4-de6adf188e24","Type":"ContainerDied","Data":"0f2cd21fa77984269424706bd9843a6592fdd7f0b6272e7e9dfc6450008c946c"} Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.067460 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2cd21fa77984269424706bd9843a6592fdd7f0b6272e7e9dfc6450008c946c" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.067486 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-snp5b" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.155401 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh"] Feb 19 10:13:30 crc kubenswrapper[4873]: E0219 10:13:30.155828 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0739ccd-765a-42c4-89b4-de6adf188e24" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.155850 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0739ccd-765a-42c4-89b4-de6adf188e24" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:30 crc kubenswrapper[4873]: E0219 10:13:30.155898 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="registry-server" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.155907 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="registry-server" Feb 19 10:13:30 crc kubenswrapper[4873]: E0219 10:13:30.155917 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="extract-content" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.155924 4873 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="extract-content" Feb 19 10:13:30 crc kubenswrapper[4873]: E0219 10:13:30.155942 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="extract-utilities" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.155950 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="extract-utilities" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.158237 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0739ccd-765a-42c4-89b4-de6adf188e24" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.158448 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83cc040-5619-46ef-9e78-b7b1f1117e79" containerName="registry-server" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.159915 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.162324 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.162429 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.163039 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.163192 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.178502 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh"] Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.240510 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.240566 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: 
I0219 10:13:30.240667 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767x5\" (UniqueName: \"kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.342696 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.343172 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.343219 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767x5\" (UniqueName: \"kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.352934 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.363070 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.377524 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767x5\" (UniqueName: \"kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:30 crc kubenswrapper[4873]: I0219 10:13:30.479522 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:31 crc kubenswrapper[4873]: I0219 10:13:31.082828 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh"] Feb 19 10:13:31 crc kubenswrapper[4873]: W0219 10:13:31.093163 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f40398_582f_40ed_92b8_2ff5a19d138d.slice/crio-9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1 WatchSource:0}: Error finding container 9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1: Status 404 returned error can't find the container with id 9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1 Feb 19 10:13:32 crc kubenswrapper[4873]: I0219 10:13:32.089526 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" event={"ID":"28f40398-582f-40ed-92b8-2ff5a19d138d","Type":"ContainerStarted","Data":"db6a1a39d537566ce77f3f1bcf766e8f8b64a23fdef4067758d74440b926bcbf"} Feb 19 10:13:32 crc kubenswrapper[4873]: I0219 10:13:32.089900 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" event={"ID":"28f40398-582f-40ed-92b8-2ff5a19d138d","Type":"ContainerStarted","Data":"9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1"} Feb 19 10:13:32 crc kubenswrapper[4873]: I0219 10:13:32.112244 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" podStartSLOduration=1.7246795050000001 podStartE2EDuration="2.112219667s" podCreationTimestamp="2026-02-19 10:13:30 +0000 UTC" firstStartedPulling="2026-02-19 10:13:31.095896145 +0000 UTC m=+1720.385327783" lastFinishedPulling="2026-02-19 10:13:31.483436307 +0000 
UTC m=+1720.772867945" observedRunningTime="2026-02-19 10:13:32.105394695 +0000 UTC m=+1721.394826333" watchObservedRunningTime="2026-02-19 10:13:32.112219667 +0000 UTC m=+1721.401651315" Feb 19 10:13:37 crc kubenswrapper[4873]: I0219 10:13:37.133524 4873 generic.go:334] "Generic (PLEG): container finished" podID="28f40398-582f-40ed-92b8-2ff5a19d138d" containerID="db6a1a39d537566ce77f3f1bcf766e8f8b64a23fdef4067758d74440b926bcbf" exitCode=0 Feb 19 10:13:37 crc kubenswrapper[4873]: I0219 10:13:37.133613 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" event={"ID":"28f40398-582f-40ed-92b8-2ff5a19d138d","Type":"ContainerDied","Data":"db6a1a39d537566ce77f3f1bcf766e8f8b64a23fdef4067758d74440b926bcbf"} Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.569492 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.668598 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam\") pod \"28f40398-582f-40ed-92b8-2ff5a19d138d\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.668815 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory\") pod \"28f40398-582f-40ed-92b8-2ff5a19d138d\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.668876 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767x5\" (UniqueName: \"kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5\") 
pod \"28f40398-582f-40ed-92b8-2ff5a19d138d\" (UID: \"28f40398-582f-40ed-92b8-2ff5a19d138d\") " Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.676389 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5" (OuterVolumeSpecName: "kube-api-access-767x5") pod "28f40398-582f-40ed-92b8-2ff5a19d138d" (UID: "28f40398-582f-40ed-92b8-2ff5a19d138d"). InnerVolumeSpecName "kube-api-access-767x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.703429 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28f40398-582f-40ed-92b8-2ff5a19d138d" (UID: "28f40398-582f-40ed-92b8-2ff5a19d138d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.703853 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory" (OuterVolumeSpecName: "inventory") pod "28f40398-582f-40ed-92b8-2ff5a19d138d" (UID: "28f40398-582f-40ed-92b8-2ff5a19d138d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.772462 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.772585 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767x5\" (UniqueName: \"kubernetes.io/projected/28f40398-582f-40ed-92b8-2ff5a19d138d-kube-api-access-767x5\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:38 crc kubenswrapper[4873]: I0219 10:13:38.772655 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28f40398-582f-40ed-92b8-2ff5a19d138d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.153666 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" event={"ID":"28f40398-582f-40ed-92b8-2ff5a19d138d","Type":"ContainerDied","Data":"9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1"} Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.153991 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5470c7d29ded12e8a197c8c4cfe7095c9f9b3f53492033dd1b642c2da889c1" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.153735 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.232798 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"] Feb 19 10:13:39 crc kubenswrapper[4873]: E0219 10:13:39.233282 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f40398-582f-40ed-92b8-2ff5a19d138d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.233303 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f40398-582f-40ed-92b8-2ff5a19d138d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.233510 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f40398-582f-40ed-92b8-2ff5a19d138d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.234331 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.237370 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.239772 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.239798 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.240175 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.247659 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"]
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.384468 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.384685 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbvd\" (UniqueName: \"kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.384737 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.486550 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbvd\" (UniqueName: \"kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.486646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.486739 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.493698 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.493862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.522917 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbvd\" (UniqueName: \"kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s2jwj\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:39 crc kubenswrapper[4873]: I0219 10:13:39.552130 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:13:40 crc kubenswrapper[4873]: I0219 10:13:40.072843 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"]
Feb 19 10:13:40 crc kubenswrapper[4873]: I0219 10:13:40.163233 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj" event={"ID":"4b127e45-b09c-4e11-9423-58f1f51effd4","Type":"ContainerStarted","Data":"43fe58502bbf464c2caec60cd36406ffe3030aecb4ac2b178dc13a054c2d3c80"}
Feb 19 10:13:41 crc kubenswrapper[4873]: I0219 10:13:41.173142 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj" event={"ID":"4b127e45-b09c-4e11-9423-58f1f51effd4","Type":"ContainerStarted","Data":"10b81f24298519dc0db6df1a4cede50ee4703691c259651a692c79216d96e98e"}
Feb 19 10:13:41 crc kubenswrapper[4873]: I0219 10:13:41.197145 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj" podStartSLOduration=1.740420973 podStartE2EDuration="2.197124799s" podCreationTimestamp="2026-02-19 10:13:39 +0000 UTC" firstStartedPulling="2026-02-19 10:13:40.081938063 +0000 UTC m=+1729.371369701" lastFinishedPulling="2026-02-19 10:13:40.538641859 +0000 UTC m=+1729.828073527" observedRunningTime="2026-02-19 10:13:41.189298292 +0000 UTC m=+1730.478729930" watchObservedRunningTime="2026-02-19 10:13:41.197124799 +0000 UTC m=+1730.486556437"
Feb 19 10:13:42 crc kubenswrapper[4873]: I0219 10:13:42.484314 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:13:42 crc kubenswrapper[4873]: E0219 10:13:42.484826 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:13:48 crc kubenswrapper[4873]: I0219 10:13:48.036495 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-83cd-account-create-update-9h25q"]
Feb 19 10:13:48 crc kubenswrapper[4873]: I0219 10:13:48.046836 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-83cd-account-create-update-9h25q"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.041713 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hbt9r"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.057292 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-5862l"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.070377 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cqfhq"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.078515 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d9dc-account-create-update-p9wrt"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.087457 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-00fb-account-create-update-4594l"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.098006 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hbt9r"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.107637 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-00fb-account-create-update-4594l"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.116379 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d9dc-account-create-update-p9wrt"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.123524 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-5862l"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.130879 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cqfhq"]
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.497093 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3129fa03-2686-49af-a434-341b19fb6661" path="/var/lib/kubelet/pods/3129fa03-2686-49af-a434-341b19fb6661/volumes"
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.498097 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c26aa2d-a8f4-4645-a1b6-055cb88e64d6" path="/var/lib/kubelet/pods/3c26aa2d-a8f4-4645-a1b6-055cb88e64d6/volumes"
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.498875 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79bae2a9-56d6-4292-b84b-c346934e5e08" path="/var/lib/kubelet/pods/79bae2a9-56d6-4292-b84b-c346934e5e08/volumes"
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.499623 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d06337-fba1-4b9c-abbc-02f635fd3bdd" path="/var/lib/kubelet/pods/b1d06337-fba1-4b9c-abbc-02f635fd3bdd/volumes"
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.500980 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0" path="/var/lib/kubelet/pods/bcf2377e-6f0a-4dcb-81ff-6ce950b9c3c0/volumes"
Feb 19 10:13:49 crc kubenswrapper[4873]: I0219 10:13:49.501743 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7623a19-7720-48a2-9a09-7c1d9d1acf3a" path="/var/lib/kubelet/pods/c7623a19-7720-48a2-9a09-7c1d9d1acf3a/volumes"
Feb 19 10:13:56 crc kubenswrapper[4873]: I0219 10:13:56.484627 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:13:56 crc kubenswrapper[4873]: E0219 10:13:56.485625 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:13:57 crc kubenswrapper[4873]: I0219 10:13:57.928833 4873 scope.go:117] "RemoveContainer" containerID="bd733b3ac9bfee3b4bbee18624c05c5bec301e1fa09008f9d0b7376ff957c31a"
Feb 19 10:13:57 crc kubenswrapper[4873]: I0219 10:13:57.964898 4873 scope.go:117] "RemoveContainer" containerID="4f7932193028af20a89fc4d6ec905cbeaeae8f2a0c2eccdd691dcdae0d83a150"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.004818 4873 scope.go:117] "RemoveContainer" containerID="9a04cebf97180c8ea6d0724c6fe0c31aa2fbc8062f300b3608d26c13788862d9"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.054509 4873 scope.go:117] "RemoveContainer" containerID="e32aae1cb5da5f588b5186b7220b1239b5386c9e999d9330ceeb577323a9711c"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.102881 4873 scope.go:117] "RemoveContainer" containerID="e48e8c3f3cd0f5266c11f4e70aa13217161d56e20c834665683a06bd7308e111"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.158442 4873 scope.go:117] "RemoveContainer" containerID="4b38c8bb0d66f0eab5c9674afb8862e41ff03592dbf71798d72618e652e32219"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.204508 4873 scope.go:117] "RemoveContainer" containerID="dd8d0b4c8e6c8fa16639b3273dca3bab2c82aa1c797c85d4fed1f4b2808775ab"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.222646 4873 scope.go:117] "RemoveContainer" containerID="2874d7c078f6aebe4e7f936700ecedd6916a0afc5a2e7ddcc365abe01b70926a"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.244777 4873 scope.go:117] "RemoveContainer" containerID="ad115d69dacdb41d674a33e1db809c1ccc6821733d5a2f7e41e2ae5cb63809b4"
Feb 19 10:13:58 crc kubenswrapper[4873]: I0219 10:13:58.282876 4873 scope.go:117] "RemoveContainer" containerID="22b91ea45d57e3f8ed16da3e5a4058c15af39a6d914075c4521ba6755b03990b"
Feb 19 10:14:07 crc kubenswrapper[4873]: I0219 10:14:07.487358 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:14:07 crc kubenswrapper[4873]: E0219 10:14:07.491354 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:14:19 crc kubenswrapper[4873]: I0219 10:14:19.485283 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:14:19 crc kubenswrapper[4873]: E0219 10:14:19.485994 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:14:19 crc kubenswrapper[4873]: I0219 10:14:19.545351 4873 generic.go:334] "Generic (PLEG): container finished" podID="4b127e45-b09c-4e11-9423-58f1f51effd4" containerID="10b81f24298519dc0db6df1a4cede50ee4703691c259651a692c79216d96e98e" exitCode=0
Feb 19 10:14:19 crc kubenswrapper[4873]: I0219 10:14:19.545396 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj" event={"ID":"4b127e45-b09c-4e11-9423-58f1f51effd4","Type":"ContainerDied","Data":"10b81f24298519dc0db6df1a4cede50ee4703691c259651a692c79216d96e98e"}
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.009870 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.196460 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory\") pod \"4b127e45-b09c-4e11-9423-58f1f51effd4\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") "
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.197125 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbvd\" (UniqueName: \"kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd\") pod \"4b127e45-b09c-4e11-9423-58f1f51effd4\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") "
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.197204 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam\") pod \"4b127e45-b09c-4e11-9423-58f1f51effd4\" (UID: \"4b127e45-b09c-4e11-9423-58f1f51effd4\") "
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.206308 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd" (OuterVolumeSpecName: "kube-api-access-5qbvd") pod "4b127e45-b09c-4e11-9423-58f1f51effd4" (UID: "4b127e45-b09c-4e11-9423-58f1f51effd4"). InnerVolumeSpecName "kube-api-access-5qbvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.222939 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory" (OuterVolumeSpecName: "inventory") pod "4b127e45-b09c-4e11-9423-58f1f51effd4" (UID: "4b127e45-b09c-4e11-9423-58f1f51effd4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.224844 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b127e45-b09c-4e11-9423-58f1f51effd4" (UID: "4b127e45-b09c-4e11-9423-58f1f51effd4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.299926 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.299961 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbvd\" (UniqueName: \"kubernetes.io/projected/4b127e45-b09c-4e11-9423-58f1f51effd4-kube-api-access-5qbvd\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.299971 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b127e45-b09c-4e11-9423-58f1f51effd4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.564511 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj" event={"ID":"4b127e45-b09c-4e11-9423-58f1f51effd4","Type":"ContainerDied","Data":"43fe58502bbf464c2caec60cd36406ffe3030aecb4ac2b178dc13a054c2d3c80"}
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.564562 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fe58502bbf464c2caec60cd36406ffe3030aecb4ac2b178dc13a054c2d3c80"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.564570 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s2jwj"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.659527 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"]
Feb 19 10:14:21 crc kubenswrapper[4873]: E0219 10:14:21.659909 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b127e45-b09c-4e11-9423-58f1f51effd4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.659929 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b127e45-b09c-4e11-9423-58f1f51effd4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.660122 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b127e45-b09c-4e11-9423-58f1f51effd4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.660842 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.662463 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.662649 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.662761 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.664020 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.694511 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"]
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.808877 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.809151 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.809211 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2vh\" (UniqueName: \"kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.911814 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.912125 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2vh\" (UniqueName: \"kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.912377 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.919275 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.919722 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.929307 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2vh\" (UniqueName: \"kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:21 crc kubenswrapper[4873]: I0219 10:14:21.978201 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"
Feb 19 10:14:22 crc kubenswrapper[4873]: I0219 10:14:22.538255 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2"]
Feb 19 10:14:22 crc kubenswrapper[4873]: I0219 10:14:22.574480 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" event={"ID":"40ec1f13-0b91-4c7c-a13e-11e60f55e627","Type":"ContainerStarted","Data":"c0ca2118706be6749415b1d33611a0ef01e91959611996a49fa7429d27412f42"}
Feb 19 10:14:23 crc kubenswrapper[4873]: I0219 10:14:23.583854 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" event={"ID":"40ec1f13-0b91-4c7c-a13e-11e60f55e627","Type":"ContainerStarted","Data":"3d4d35c803c524343d799d1966633f50e6268b58e19fac8f6e1497548c00acc7"}
Feb 19 10:14:23 crc kubenswrapper[4873]: I0219 10:14:23.605026 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" podStartSLOduration=2.1402025829999998 podStartE2EDuration="2.605008266s" podCreationTimestamp="2026-02-19 10:14:21 +0000 UTC" firstStartedPulling="2026-02-19 10:14:22.544955586 +0000 UTC m=+1771.834387224" lastFinishedPulling="2026-02-19 10:14:23.009761269 +0000 UTC m=+1772.299192907" observedRunningTime="2026-02-19 10:14:23.596074284 +0000 UTC m=+1772.885505922" watchObservedRunningTime="2026-02-19 10:14:23.605008266 +0000 UTC m=+1772.894439904"
Feb 19 10:14:24 crc kubenswrapper[4873]: I0219 10:14:24.043303 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgm8t"]
Feb 19 10:14:24 crc kubenswrapper[4873]: I0219 10:14:24.054831 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qgm8t"]
Feb 19 10:14:25 crc kubenswrapper[4873]: I0219 10:14:25.494897 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8fe617-c1d5-41f8-a23a-eeb88444f620" path="/var/lib/kubelet/pods/2f8fe617-c1d5-41f8-a23a-eeb88444f620/volumes"
Feb 19 10:14:32 crc kubenswrapper[4873]: I0219 10:14:32.483667 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:14:32 crc kubenswrapper[4873]: E0219 10:14:32.484194 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:14:46 crc kubenswrapper[4873]: I0219 10:14:46.484949 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:14:46 crc kubenswrapper[4873]: E0219 10:14:46.485657 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:14:48 crc kubenswrapper[4873]: I0219 10:14:48.052591 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xzlnz"]
Feb 19 10:14:48 crc kubenswrapper[4873]: I0219 10:14:48.064201 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xzlnz"]
Feb 19 10:14:49 crc kubenswrapper[4873]: I0219 10:14:49.496736 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b81c17-9130-4def-8021-e73168601bf6" path="/var/lib/kubelet/pods/54b81c17-9130-4def-8021-e73168601bf6/volumes"
Feb 19 10:14:56 crc kubenswrapper[4873]: I0219 10:14:56.024835 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9z5nq"]
Feb 19 10:14:56 crc kubenswrapper[4873]: I0219 10:14:56.034253 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9z5nq"]
Feb 19 10:14:57 crc kubenswrapper[4873]: I0219 10:14:57.494012 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fca831-509a-4abd-bb7e-2c0f4704368b" path="/var/lib/kubelet/pods/96fca831-509a-4abd-bb7e-2c0f4704368b/volumes"
Feb 19 10:14:58 crc kubenswrapper[4873]: I0219 10:14:58.484649 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790"
Feb 19 10:14:58 crc kubenswrapper[4873]: E0219 10:14:58.485028 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:14:58 crc kubenswrapper[4873]: I0219 10:14:58.513872 4873 scope.go:117] "RemoveContainer" containerID="343f5f5d97db66db1963a29b53ef93078842c1069756343de5aa869eb8885cd9"
Feb 19 10:14:58 crc kubenswrapper[4873]: I0219 10:14:58.560564 4873 scope.go:117] "RemoveContainer" containerID="c163e1fbae8dfb18e81d4177d941e04ca8d149e8d88a196ee094871f3dd31d8c"
Feb 19 10:14:58 crc kubenswrapper[4873]: I0219 10:14:58.629833 4873 scope.go:117] "RemoveContainer" containerID="e9c86902c9a53b767e99a6b86b96ed298fea1e2244a0785fd8c7eeb7d4f69fa7"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.148981 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"]
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.150944 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.153370 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.154380 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.162566 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"]
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.165221 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.165264 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.165326 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4rh\" (UniqueName: \"kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.268086 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.268185 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.268261 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4rh\" (UniqueName: \"kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.269158 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.277344 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.285865 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4rh\" (UniqueName: \"kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh\") pod \"collect-profiles-29524935-sjf58\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.506654 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:00 crc kubenswrapper[4873]: I0219 10:15:00.969622 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"]
Feb 19 10:15:01 crc kubenswrapper[4873]: I0219 10:15:01.910810 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" event={"ID":"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580","Type":"ContainerStarted","Data":"51b872a4026735697f0f9cc00b395427fbc06efd93a529d94f5319e0d220778e"}
Feb 19 10:15:01 crc kubenswrapper[4873]: I0219 10:15:01.911161 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" event={"ID":"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580","Type":"ContainerStarted","Data":"ed5cb2eb001ffb4cf09c9673a7d4a3d78fcb8812596090a28da42ffffec36654"}
Feb 19 10:15:01 crc kubenswrapper[4873]: I0219 10:15:01.934406 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" podStartSLOduration=1.9343864229999999 podStartE2EDuration="1.934386423s" podCreationTimestamp="2026-02-19 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:15:01.927700417 +0000 UTC m=+1811.217132045" watchObservedRunningTime="2026-02-19 10:15:01.934386423 +0000 UTC m=+1811.223818071"
Feb 19 10:15:02 crc kubenswrapper[4873]: I0219 10:15:02.923346 4873 generic.go:334] "Generic (PLEG): container finished" podID="fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" containerID="51b872a4026735697f0f9cc00b395427fbc06efd93a529d94f5319e0d220778e" exitCode=0
Feb 19 10:15:02 crc kubenswrapper[4873]: I0219 10:15:02.923392 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" event={"ID":"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580","Type":"ContainerDied","Data":"51b872a4026735697f0f9cc00b395427fbc06efd93a529d94f5319e0d220778e"}
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.244762 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.352762 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume\") pod \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") "
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.352895 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t4rh\" (UniqueName: \"kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh\") pod \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " 
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.352992 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume\") pod \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\" (UID: \"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580\") " 
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.353984 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" (UID: "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.358747 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh" (OuterVolumeSpecName: "kube-api-access-5t4rh") pod "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" (UID: "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580"). 
InnerVolumeSpecName "kube-api-access-5t4rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.359414 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" (UID: "fb4ec2bd-4c16-4682-873a-4fbdcc5d9580"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.455490 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t4rh\" (UniqueName: \"kubernetes.io/projected/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-kube-api-access-5t4rh\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.455527 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.455537 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.939471 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" event={"ID":"fb4ec2bd-4c16-4682-873a-4fbdcc5d9580","Type":"ContainerDied","Data":"ed5cb2eb001ffb4cf09c9673a7d4a3d78fcb8812596090a28da42ffffec36654"} Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.939510 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed5cb2eb001ffb4cf09c9673a7d4a3d78fcb8812596090a28da42ffffec36654" Feb 19 10:15:04 crc kubenswrapper[4873]: I0219 10:15:04.939552 4873 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58" Feb 19 10:15:11 crc kubenswrapper[4873]: I0219 10:15:11.490086 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:15:11 crc kubenswrapper[4873]: E0219 10:15:11.490896 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:15:14 crc kubenswrapper[4873]: I0219 10:15:14.021002 4873 generic.go:334] "Generic (PLEG): container finished" podID="40ec1f13-0b91-4c7c-a13e-11e60f55e627" containerID="3d4d35c803c524343d799d1966633f50e6268b58e19fac8f6e1497548c00acc7" exitCode=0 Feb 19 10:15:14 crc kubenswrapper[4873]: I0219 10:15:14.021067 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" event={"ID":"40ec1f13-0b91-4c7c-a13e-11e60f55e627","Type":"ContainerDied","Data":"3d4d35c803c524343d799d1966633f50e6268b58e19fac8f6e1497548c00acc7"} Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.421519 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.577746 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam\") pod \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.577900 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz2vh\" (UniqueName: \"kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh\") pod \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.577942 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory\") pod \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\" (UID: \"40ec1f13-0b91-4c7c-a13e-11e60f55e627\") " Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.584523 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh" (OuterVolumeSpecName: "kube-api-access-pz2vh") pod "40ec1f13-0b91-4c7c-a13e-11e60f55e627" (UID: "40ec1f13-0b91-4c7c-a13e-11e60f55e627"). InnerVolumeSpecName "kube-api-access-pz2vh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.608576 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "40ec1f13-0b91-4c7c-a13e-11e60f55e627" (UID: "40ec1f13-0b91-4c7c-a13e-11e60f55e627"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.631115 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory" (OuterVolumeSpecName: "inventory") pod "40ec1f13-0b91-4c7c-a13e-11e60f55e627" (UID: "40ec1f13-0b91-4c7c-a13e-11e60f55e627"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.681300 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz2vh\" (UniqueName: \"kubernetes.io/projected/40ec1f13-0b91-4c7c-a13e-11e60f55e627-kube-api-access-pz2vh\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.681337 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:15 crc kubenswrapper[4873]: I0219 10:15:15.681353 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40ec1f13-0b91-4c7c-a13e-11e60f55e627-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.045670 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" 
event={"ID":"40ec1f13-0b91-4c7c-a13e-11e60f55e627","Type":"ContainerDied","Data":"c0ca2118706be6749415b1d33611a0ef01e91959611996a49fa7429d27412f42"} Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.045719 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ca2118706be6749415b1d33611a0ef01e91959611996a49fa7429d27412f42" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.045780 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2" Feb 19 10:15:16 crc kubenswrapper[4873]: E0219 10:15:16.154954 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40ec1f13_0b91_4c7c_a13e_11e60f55e627.slice\": RecentStats: unable to find data in memory cache]" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.220060 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sw8hj"] Feb 19 10:15:16 crc kubenswrapper[4873]: E0219 10:15:16.221228 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec1f13-0b91-4c7c-a13e-11e60f55e627" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.221327 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec1f13-0b91-4c7c-a13e-11e60f55e627" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:16 crc kubenswrapper[4873]: E0219 10:15:16.221429 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" containerName="collect-profiles" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.221527 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" containerName="collect-profiles" Feb 19 10:15:16 crc kubenswrapper[4873]: 
I0219 10:15:16.222018 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ec1f13-0b91-4c7c-a13e-11e60f55e627" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.222179 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" containerName="collect-profiles" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.223378 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.225594 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.225763 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.226220 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.226223 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.233010 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sw8hj"] Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.393571 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.393709 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.393809 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px675\" (UniqueName: \"kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.495723 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.495788 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.495821 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px675\" (UniqueName: \"kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.500931 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.501490 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.512915 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px675\" (UniqueName: \"kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675\") pod \"ssh-known-hosts-edpm-deployment-sw8hj\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:16 crc kubenswrapper[4873]: I0219 10:15:16.539242 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:17 crc kubenswrapper[4873]: I0219 10:15:17.077908 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sw8hj"] Feb 19 10:15:18 crc kubenswrapper[4873]: I0219 10:15:18.067536 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" event={"ID":"15999617-f2b4-4a3f-911d-422db799fa37","Type":"ContainerStarted","Data":"13e39142bcaa9f5eb7b0f72d48b915ecda145dd5430ccf135924bd5a76f7486b"} Feb 19 10:15:18 crc kubenswrapper[4873]: I0219 10:15:18.068218 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" event={"ID":"15999617-f2b4-4a3f-911d-422db799fa37","Type":"ContainerStarted","Data":"dedd5cc5a97e635d8fde4448e433d1bd645acff522100d4963d6d3e8d6e972fc"} Feb 19 10:15:18 crc kubenswrapper[4873]: I0219 10:15:18.092614 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" podStartSLOduration=1.700058624 podStartE2EDuration="2.092580958s" podCreationTimestamp="2026-02-19 10:15:16 +0000 UTC" firstStartedPulling="2026-02-19 10:15:17.087756362 +0000 UTC m=+1826.377188010" lastFinishedPulling="2026-02-19 10:15:17.480278706 +0000 UTC m=+1826.769710344" observedRunningTime="2026-02-19 10:15:18.089587033 +0000 UTC m=+1827.379018721" watchObservedRunningTime="2026-02-19 10:15:18.092580958 +0000 UTC m=+1827.382012636" Feb 19 10:15:23 crc kubenswrapper[4873]: I0219 10:15:23.483945 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:15:24 crc kubenswrapper[4873]: I0219 10:15:24.120299 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" 
event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5"} Feb 19 10:15:25 crc kubenswrapper[4873]: I0219 10:15:25.129945 4873 generic.go:334] "Generic (PLEG): container finished" podID="15999617-f2b4-4a3f-911d-422db799fa37" containerID="13e39142bcaa9f5eb7b0f72d48b915ecda145dd5430ccf135924bd5a76f7486b" exitCode=0 Feb 19 10:15:25 crc kubenswrapper[4873]: I0219 10:15:25.130043 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" event={"ID":"15999617-f2b4-4a3f-911d-422db799fa37","Type":"ContainerDied","Data":"13e39142bcaa9f5eb7b0f72d48b915ecda145dd5430ccf135924bd5a76f7486b"} Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.538237 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.688858 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px675\" (UniqueName: \"kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675\") pod \"15999617-f2b4-4a3f-911d-422db799fa37\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.688951 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam\") pod \"15999617-f2b4-4a3f-911d-422db799fa37\" (UID: \"15999617-f2b4-4a3f-911d-422db799fa37\") " Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.689143 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0\") pod \"15999617-f2b4-4a3f-911d-422db799fa37\" (UID: 
\"15999617-f2b4-4a3f-911d-422db799fa37\") " Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.694714 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675" (OuterVolumeSpecName: "kube-api-access-px675") pod "15999617-f2b4-4a3f-911d-422db799fa37" (UID: "15999617-f2b4-4a3f-911d-422db799fa37"). InnerVolumeSpecName "kube-api-access-px675". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.715755 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "15999617-f2b4-4a3f-911d-422db799fa37" (UID: "15999617-f2b4-4a3f-911d-422db799fa37"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.730865 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15999617-f2b4-4a3f-911d-422db799fa37" (UID: "15999617-f2b4-4a3f-911d-422db799fa37"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.792189 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px675\" (UniqueName: \"kubernetes.io/projected/15999617-f2b4-4a3f-911d-422db799fa37-kube-api-access-px675\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.792276 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:26 crc kubenswrapper[4873]: I0219 10:15:26.792293 4873 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15999617-f2b4-4a3f-911d-422db799fa37-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.148790 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" event={"ID":"15999617-f2b4-4a3f-911d-422db799fa37","Type":"ContainerDied","Data":"dedd5cc5a97e635d8fde4448e433d1bd645acff522100d4963d6d3e8d6e972fc"} Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.148831 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dedd5cc5a97e635d8fde4448e433d1bd645acff522100d4963d6d3e8d6e972fc" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.149163 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sw8hj" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.235445 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf"] Feb 19 10:15:27 crc kubenswrapper[4873]: E0219 10:15:27.235871 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15999617-f2b4-4a3f-911d-422db799fa37" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.235914 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="15999617-f2b4-4a3f-911d-422db799fa37" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.236208 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="15999617-f2b4-4a3f-911d-422db799fa37" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.236860 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.239354 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.239466 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.240093 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.243859 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.259168 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf"] Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.406383 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6bl7\" (UniqueName: \"kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.406473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.406597 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.508096 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.508255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6bl7\" (UniqueName: \"kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.508312 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.513466 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: 
\"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.514630 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.526392 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6bl7\" (UniqueName: \"kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5wvjf\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:27 crc kubenswrapper[4873]: I0219 10:15:27.554829 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:28 crc kubenswrapper[4873]: I0219 10:15:28.088833 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf"] Feb 19 10:15:28 crc kubenswrapper[4873]: W0219 10:15:28.096790 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7843f72c_5559_44d6_86e0_62f013e0a073.slice/crio-bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181 WatchSource:0}: Error finding container bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181: Status 404 returned error can't find the container with id bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181 Feb 19 10:15:28 crc kubenswrapper[4873]: I0219 10:15:28.157776 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" event={"ID":"7843f72c-5559-44d6-86e0-62f013e0a073","Type":"ContainerStarted","Data":"bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181"} Feb 19 10:15:30 crc kubenswrapper[4873]: I0219 10:15:30.228223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" event={"ID":"7843f72c-5559-44d6-86e0-62f013e0a073","Type":"ContainerStarted","Data":"dfffb5690d0bc3ab31c1e460684b09eb768ee575f66111d0e554420440e9c976"} Feb 19 10:15:30 crc kubenswrapper[4873]: I0219 10:15:30.251619 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" podStartSLOduration=2.33593736 podStartE2EDuration="3.251593799s" podCreationTimestamp="2026-02-19 10:15:27 +0000 UTC" firstStartedPulling="2026-02-19 10:15:28.099509622 +0000 UTC m=+1837.388941260" lastFinishedPulling="2026-02-19 10:15:29.015166061 +0000 UTC m=+1838.304597699" observedRunningTime="2026-02-19 
10:15:30.2500409 +0000 UTC m=+1839.539472578" watchObservedRunningTime="2026-02-19 10:15:30.251593799 +0000 UTC m=+1839.541025457" Feb 19 10:15:33 crc kubenswrapper[4873]: I0219 10:15:33.055439 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ljn4d"] Feb 19 10:15:33 crc kubenswrapper[4873]: I0219 10:15:33.063400 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ljn4d"] Feb 19 10:15:33 crc kubenswrapper[4873]: I0219 10:15:33.497093 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355c3bd2-5fb4-4a28-be15-e766b61eeed9" path="/var/lib/kubelet/pods/355c3bd2-5fb4-4a28-be15-e766b61eeed9/volumes" Feb 19 10:15:37 crc kubenswrapper[4873]: I0219 10:15:37.290661 4873 generic.go:334] "Generic (PLEG): container finished" podID="7843f72c-5559-44d6-86e0-62f013e0a073" containerID="dfffb5690d0bc3ab31c1e460684b09eb768ee575f66111d0e554420440e9c976" exitCode=0 Feb 19 10:15:37 crc kubenswrapper[4873]: I0219 10:15:37.290731 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" event={"ID":"7843f72c-5559-44d6-86e0-62f013e0a073","Type":"ContainerDied","Data":"dfffb5690d0bc3ab31c1e460684b09eb768ee575f66111d0e554420440e9c976"} Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.749092 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.848851 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6bl7\" (UniqueName: \"kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7\") pod \"7843f72c-5559-44d6-86e0-62f013e0a073\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.849006 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory\") pod \"7843f72c-5559-44d6-86e0-62f013e0a073\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.849174 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam\") pod \"7843f72c-5559-44d6-86e0-62f013e0a073\" (UID: \"7843f72c-5559-44d6-86e0-62f013e0a073\") " Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.854782 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7" (OuterVolumeSpecName: "kube-api-access-f6bl7") pod "7843f72c-5559-44d6-86e0-62f013e0a073" (UID: "7843f72c-5559-44d6-86e0-62f013e0a073"). InnerVolumeSpecName "kube-api-access-f6bl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.878478 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7843f72c-5559-44d6-86e0-62f013e0a073" (UID: "7843f72c-5559-44d6-86e0-62f013e0a073"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.878836 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory" (OuterVolumeSpecName: "inventory") pod "7843f72c-5559-44d6-86e0-62f013e0a073" (UID: "7843f72c-5559-44d6-86e0-62f013e0a073"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.951968 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6bl7\" (UniqueName: \"kubernetes.io/projected/7843f72c-5559-44d6-86e0-62f013e0a073-kube-api-access-f6bl7\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.952051 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:38 crc kubenswrapper[4873]: I0219 10:15:38.952080 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7843f72c-5559-44d6-86e0-62f013e0a073-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.317342 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" 
event={"ID":"7843f72c-5559-44d6-86e0-62f013e0a073","Type":"ContainerDied","Data":"bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181"} Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.317387 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcbad2c4ee95dd8d3c820ca667fce5604d8315d212f86381338d0901a7881181" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.317449 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5wvjf" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.387857 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj"] Feb 19 10:15:39 crc kubenswrapper[4873]: E0219 10:15:39.388356 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7843f72c-5559-44d6-86e0-62f013e0a073" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.388382 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="7843f72c-5559-44d6-86e0-62f013e0a073" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.388634 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="7843f72c-5559-44d6-86e0-62f013e0a073" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.389505 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.391665 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.398346 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj"] Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.400449 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.400525 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.400665 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.462557 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.462646 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.462755 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szglk\" (UniqueName: \"kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.564348 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szglk\" (UniqueName: \"kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.564541 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.564593 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.571876 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.589440 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szglk\" (UniqueName: \"kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.590511 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:39 crc kubenswrapper[4873]: I0219 10:15:39.707572 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:40 crc kubenswrapper[4873]: I0219 10:15:40.240512 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj"] Feb 19 10:15:40 crc kubenswrapper[4873]: W0219 10:15:40.245777 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod157ee933_b692_4c92_bcbd_967bc1cd377c.slice/crio-8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296 WatchSource:0}: Error finding container 8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296: Status 404 returned error can't find the container with id 8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296 Feb 19 10:15:40 crc kubenswrapper[4873]: I0219 10:15:40.250044 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:15:40 crc kubenswrapper[4873]: I0219 10:15:40.328395 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" event={"ID":"157ee933-b692-4c92-bcbd-967bc1cd377c","Type":"ContainerStarted","Data":"8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296"} Feb 19 10:15:41 crc kubenswrapper[4873]: I0219 10:15:41.338900 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" event={"ID":"157ee933-b692-4c92-bcbd-967bc1cd377c","Type":"ContainerStarted","Data":"82218227e4b04484e348e2f28e8d3f15ccea294e26f028723d97bf71c024e437"} Feb 19 10:15:41 crc kubenswrapper[4873]: I0219 10:15:41.361149 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" podStartSLOduration=1.9329847020000002 podStartE2EDuration="2.361131233s" podCreationTimestamp="2026-02-19 10:15:39 +0000 UTC" 
firstStartedPulling="2026-02-19 10:15:40.249752595 +0000 UTC m=+1849.539184233" lastFinishedPulling="2026-02-19 10:15:40.677899086 +0000 UTC m=+1849.967330764" observedRunningTime="2026-02-19 10:15:41.35499298 +0000 UTC m=+1850.644424618" watchObservedRunningTime="2026-02-19 10:15:41.361131233 +0000 UTC m=+1850.650562881" Feb 19 10:15:50 crc kubenswrapper[4873]: I0219 10:15:50.430390 4873 generic.go:334] "Generic (PLEG): container finished" podID="157ee933-b692-4c92-bcbd-967bc1cd377c" containerID="82218227e4b04484e348e2f28e8d3f15ccea294e26f028723d97bf71c024e437" exitCode=0 Feb 19 10:15:50 crc kubenswrapper[4873]: I0219 10:15:50.430492 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" event={"ID":"157ee933-b692-4c92-bcbd-967bc1cd377c","Type":"ContainerDied","Data":"82218227e4b04484e348e2f28e8d3f15ccea294e26f028723d97bf71c024e437"} Feb 19 10:15:51 crc kubenswrapper[4873]: I0219 10:15:51.893386 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.023006 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory\") pod \"157ee933-b692-4c92-bcbd-967bc1cd377c\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.023138 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam\") pod \"157ee933-b692-4c92-bcbd-967bc1cd377c\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.023177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szglk\" (UniqueName: \"kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk\") pod \"157ee933-b692-4c92-bcbd-967bc1cd377c\" (UID: \"157ee933-b692-4c92-bcbd-967bc1cd377c\") " Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.028727 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk" (OuterVolumeSpecName: "kube-api-access-szglk") pod "157ee933-b692-4c92-bcbd-967bc1cd377c" (UID: "157ee933-b692-4c92-bcbd-967bc1cd377c"). InnerVolumeSpecName "kube-api-access-szglk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.055886 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "157ee933-b692-4c92-bcbd-967bc1cd377c" (UID: "157ee933-b692-4c92-bcbd-967bc1cd377c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.058901 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory" (OuterVolumeSpecName: "inventory") pod "157ee933-b692-4c92-bcbd-967bc1cd377c" (UID: "157ee933-b692-4c92-bcbd-967bc1cd377c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.127019 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.127098 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szglk\" (UniqueName: \"kubernetes.io/projected/157ee933-b692-4c92-bcbd-967bc1cd377c-kube-api-access-szglk\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.127144 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/157ee933-b692-4c92-bcbd-967bc1cd377c-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.453596 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" 
event={"ID":"157ee933-b692-4c92-bcbd-967bc1cd377c","Type":"ContainerDied","Data":"8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296"} Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.453651 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cfabebb1577b55754b14ab082be646f7ff012789f460922cc89eb4f6a067296" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.453708 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.534422 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6"] Feb 19 10:15:52 crc kubenswrapper[4873]: E0219 10:15:52.535174 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157ee933-b692-4c92-bcbd-967bc1cd377c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.535281 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="157ee933-b692-4c92-bcbd-967bc1cd377c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.535574 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="157ee933-b692-4c92-bcbd-967bc1cd377c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.537724 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.543750 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.543996 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.544227 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.544632 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.545547 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.545852 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.545933 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.546048 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.560405 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6"] Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637186 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637249 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637273 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637396 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637661 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637719 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637804 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.637867 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638095 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638154 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638279 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638369 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57nr4\" (UniqueName: 
\"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.638424 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740775 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740833 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740861 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740887 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740919 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.740945 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741482 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741511 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741540 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57nr4\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741576 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741658 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 
10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741687 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741715 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.741752 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.744824 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.744913 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.745702 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.746127 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.747547 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.747573 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.748224 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.748504 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.748541 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.749265 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.749750 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.751189 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.756473 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.758894 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57nr4\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:52 crc kubenswrapper[4873]: I0219 10:15:52.862997 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:15:53 crc kubenswrapper[4873]: W0219 10:15:53.386542 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod537c2ac8_0912_4609_ab4e_760060a78d52.slice/crio-19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797 WatchSource:0}: Error finding container 19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797: Status 404 returned error can't find the container with id 19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797 Feb 19 10:15:53 crc kubenswrapper[4873]: I0219 10:15:53.388169 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6"] Feb 19 10:15:53 crc kubenswrapper[4873]: I0219 10:15:53.461818 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" event={"ID":"537c2ac8-0912-4609-ab4e-760060a78d52","Type":"ContainerStarted","Data":"19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797"} Feb 19 10:15:54 crc kubenswrapper[4873]: I0219 10:15:54.472442 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" event={"ID":"537c2ac8-0912-4609-ab4e-760060a78d52","Type":"ContainerStarted","Data":"663497054b60d32c21bffa4e04903d9e3273292d4a9fb164931190510b2cc955"} Feb 19 10:15:54 crc kubenswrapper[4873]: I0219 10:15:54.496965 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" podStartSLOduration=2.068288588 podStartE2EDuration="2.496931261s" 
podCreationTimestamp="2026-02-19 10:15:52 +0000 UTC" firstStartedPulling="2026-02-19 10:15:53.388799175 +0000 UTC m=+1862.678230813" lastFinishedPulling="2026-02-19 10:15:53.817441838 +0000 UTC m=+1863.106873486" observedRunningTime="2026-02-19 10:15:54.494204603 +0000 UTC m=+1863.783636261" watchObservedRunningTime="2026-02-19 10:15:54.496931261 +0000 UTC m=+1863.786362889" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.700896 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.704615 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.713407 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.763756 4873 scope.go:117] "RemoveContainer" containerID="087819f431db46b81166e897b21b99194aa2c81f307651c133879685fcb5a03d" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.765227 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.765287 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr67\" (UniqueName: \"kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.765387 
4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.868381 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.868831 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr67\" (UniqueName: \"kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.868866 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.868890 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.869331 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:58 crc kubenswrapper[4873]: I0219 10:15:58.906626 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr67\" (UniqueName: \"kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67\") pod \"certified-operators-tzxhp\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:59 crc kubenswrapper[4873]: I0219 10:15:59.038808 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:15:59 crc kubenswrapper[4873]: I0219 10:15:59.583862 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:16:00 crc kubenswrapper[4873]: I0219 10:16:00.536785 4873 generic.go:334] "Generic (PLEG): container finished" podID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerID="45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b" exitCode=0 Feb 19 10:16:00 crc kubenswrapper[4873]: I0219 10:16:00.536888 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerDied","Data":"45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b"} Feb 19 10:16:00 crc kubenswrapper[4873]: I0219 10:16:00.537470 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerStarted","Data":"d9e5dcde93903f3ae386da846f1499da1d8e5d8069dd46542cd55299f2b61baa"} Feb 19 10:16:01 crc 
kubenswrapper[4873]: I0219 10:16:01.547790 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerStarted","Data":"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2"} Feb 19 10:16:02 crc kubenswrapper[4873]: I0219 10:16:02.559594 4873 generic.go:334] "Generic (PLEG): container finished" podID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerID="06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2" exitCode=0 Feb 19 10:16:02 crc kubenswrapper[4873]: I0219 10:16:02.559650 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerDied","Data":"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2"} Feb 19 10:16:03 crc kubenswrapper[4873]: I0219 10:16:03.569727 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerStarted","Data":"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35"} Feb 19 10:16:03 crc kubenswrapper[4873]: I0219 10:16:03.594816 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzxhp" podStartSLOduration=3.171231069 podStartE2EDuration="5.594800029s" podCreationTimestamp="2026-02-19 10:15:58 +0000 UTC" firstStartedPulling="2026-02-19 10:16:00.539410611 +0000 UTC m=+1869.828842249" lastFinishedPulling="2026-02-19 10:16:02.962979571 +0000 UTC m=+1872.252411209" observedRunningTime="2026-02-19 10:16:03.592066741 +0000 UTC m=+1872.881498379" watchObservedRunningTime="2026-02-19 10:16:03.594800029 +0000 UTC m=+1872.884231667" Feb 19 10:16:09 crc kubenswrapper[4873]: I0219 10:16:09.039428 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:09 crc kubenswrapper[4873]: I0219 10:16:09.040070 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:09 crc kubenswrapper[4873]: I0219 10:16:09.094503 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:09 crc kubenswrapper[4873]: I0219 10:16:09.666481 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:10 crc kubenswrapper[4873]: I0219 10:16:10.291038 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:16:11 crc kubenswrapper[4873]: I0219 10:16:11.639248 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzxhp" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="registry-server" containerID="cri-o://e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35" gracePeriod=2 Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.116671 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.150885 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities\") pod \"d3655910-cb6e-4b54-bd68-48c5ba1551df\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.151020 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content\") pod \"d3655910-cb6e-4b54-bd68-48c5ba1551df\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.151267 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmr67\" (UniqueName: \"kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67\") pod \"d3655910-cb6e-4b54-bd68-48c5ba1551df\" (UID: \"d3655910-cb6e-4b54-bd68-48c5ba1551df\") " Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.151924 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities" (OuterVolumeSpecName: "utilities") pod "d3655910-cb6e-4b54-bd68-48c5ba1551df" (UID: "d3655910-cb6e-4b54-bd68-48c5ba1551df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.162417 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67" (OuterVolumeSpecName: "kube-api-access-xmr67") pod "d3655910-cb6e-4b54-bd68-48c5ba1551df" (UID: "d3655910-cb6e-4b54-bd68-48c5ba1551df"). InnerVolumeSpecName "kube-api-access-xmr67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.253509 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.253546 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmr67\" (UniqueName: \"kubernetes.io/projected/d3655910-cb6e-4b54-bd68-48c5ba1551df-kube-api-access-xmr67\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.651482 4873 generic.go:334] "Generic (PLEG): container finished" podID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerID="e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35" exitCode=0 Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.651530 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerDied","Data":"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35"} Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.651555 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tzxhp" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.651576 4873 scope.go:117] "RemoveContainer" containerID="e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.651561 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzxhp" event={"ID":"d3655910-cb6e-4b54-bd68-48c5ba1551df","Type":"ContainerDied","Data":"d9e5dcde93903f3ae386da846f1499da1d8e5d8069dd46542cd55299f2b61baa"} Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.677119 4873 scope.go:117] "RemoveContainer" containerID="06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.701865 4873 scope.go:117] "RemoveContainer" containerID="45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.743464 4873 scope.go:117] "RemoveContainer" containerID="e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35" Feb 19 10:16:12 crc kubenswrapper[4873]: E0219 10:16:12.743800 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35\": container with ID starting with e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35 not found: ID does not exist" containerID="e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.743830 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35"} err="failed to get container status \"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35\": rpc error: code = NotFound desc = could not find container 
\"e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35\": container with ID starting with e67e42fe1b67208dfa37d54ccd67ab6932304c96229032a8dd4a038c11fc7b35 not found: ID does not exist" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.743852 4873 scope.go:117] "RemoveContainer" containerID="06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2" Feb 19 10:16:12 crc kubenswrapper[4873]: E0219 10:16:12.744062 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2\": container with ID starting with 06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2 not found: ID does not exist" containerID="06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.744114 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2"} err="failed to get container status \"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2\": rpc error: code = NotFound desc = could not find container \"06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2\": container with ID starting with 06ee0c5b7947af8e68f40c163a4f6ddeac23664eb557936ef15dc5855e1148f2 not found: ID does not exist" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.744136 4873 scope.go:117] "RemoveContainer" containerID="45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b" Feb 19 10:16:12 crc kubenswrapper[4873]: E0219 10:16:12.744486 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b\": container with ID starting with 45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b not found: ID does not exist" 
containerID="45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.744545 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b"} err="failed to get container status \"45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b\": rpc error: code = NotFound desc = could not find container \"45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b\": container with ID starting with 45c12349a9b65569282146a1d281f342872959e6cda848dedf88f9d430fd517b not found: ID does not exist" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.812596 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3655910-cb6e-4b54-bd68-48c5ba1551df" (UID: "d3655910-cb6e-4b54-bd68-48c5ba1551df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.864149 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3655910-cb6e-4b54-bd68-48c5ba1551df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.983944 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:16:12 crc kubenswrapper[4873]: I0219 10:16:12.992870 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzxhp"] Feb 19 10:16:13 crc kubenswrapper[4873]: I0219 10:16:13.506914 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" path="/var/lib/kubelet/pods/d3655910-cb6e-4b54-bd68-48c5ba1551df/volumes" Feb 19 10:16:31 crc kubenswrapper[4873]: I0219 10:16:31.833613 4873 generic.go:334] "Generic (PLEG): container finished" podID="537c2ac8-0912-4609-ab4e-760060a78d52" containerID="663497054b60d32c21bffa4e04903d9e3273292d4a9fb164931190510b2cc955" exitCode=0 Feb 19 10:16:31 crc kubenswrapper[4873]: I0219 10:16:31.833708 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" event={"ID":"537c2ac8-0912-4609-ab4e-760060a78d52","Type":"ContainerDied","Data":"663497054b60d32c21bffa4e04903d9e3273292d4a9fb164931190510b2cc955"} Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.287089 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.451287 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.451383 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.451446 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.451478 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.451506 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc 
kubenswrapper[4873]: I0219 10:16:33.451540 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452173 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452255 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452303 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452348 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc 
kubenswrapper[4873]: I0219 10:16:33.452391 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452453 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57nr4\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452499 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.452526 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0\") pod \"537c2ac8-0912-4609-ab4e-760060a78d52\" (UID: \"537c2ac8-0912-4609-ab4e-760060a78d52\") " Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.459191 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.459682 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.460153 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.460197 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.462299 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.463422 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.464835 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.465044 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.466913 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.468004 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4" (OuterVolumeSpecName: "kube-api-access-57nr4") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "kube-api-access-57nr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.468216 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.476982 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.486928 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.491230 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory" (OuterVolumeSpecName: "inventory") pod "537c2ac8-0912-4609-ab4e-760060a78d52" (UID: "537c2ac8-0912-4609-ab4e-760060a78d52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554446 4873 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554477 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57nr4\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-kube-api-access-57nr4\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554508 4873 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554519 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554528 4873 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554536 4873 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554544 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554555 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554563 4873 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554573 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554582 4873 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/537c2ac8-0912-4609-ab4e-760060a78d52-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554591 4873 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554600 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.554607 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/537c2ac8-0912-4609-ab4e-760060a78d52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.850760 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" event={"ID":"537c2ac8-0912-4609-ab4e-760060a78d52","Type":"ContainerDied","Data":"19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797"} Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.850824 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b267f7d9414ea523f984cf39b74596cada12d50893b7c0d4c30582756f8797" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.850829 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.985814 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c"] Feb 19 10:16:33 crc kubenswrapper[4873]: E0219 10:16:33.986214 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537c2ac8-0912-4609-ab4e-760060a78d52" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986227 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="537c2ac8-0912-4609-ab4e-760060a78d52" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:33 crc kubenswrapper[4873]: E0219 10:16:33.986241 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="registry-server" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986248 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="registry-server" Feb 19 10:16:33 crc kubenswrapper[4873]: E0219 10:16:33.986275 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="extract-content" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986280 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="extract-content" Feb 19 10:16:33 crc kubenswrapper[4873]: E0219 10:16:33.986295 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="extract-utilities" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986300 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="extract-utilities" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986477 
4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="537c2ac8-0912-4609-ab4e-760060a78d52" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.986486 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3655910-cb6e-4b54-bd68-48c5ba1551df" containerName="registry-server" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.987091 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.994903 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.996298 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.996461 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.996636 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:16:33 crc kubenswrapper[4873]: I0219 10:16:33.998947 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.020279 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c"] Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.067358 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: 
\"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.067436 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbt8b\" (UniqueName: \"kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.067546 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.067770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.067929 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 
10:16:34.170022 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.170090 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.170180 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.170210 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbt8b\" (UniqueName: \"kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.170256 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.171308 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.175035 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.175173 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.177463 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.187801 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tbt8b\" (UniqueName: \"kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dks5c\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.354589 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:16:34 crc kubenswrapper[4873]: I0219 10:16:34.906299 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c"] Feb 19 10:16:35 crc kubenswrapper[4873]: I0219 10:16:35.871169 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" event={"ID":"f5d576b5-56dd-4f9f-b67b-0ee87213ea78","Type":"ContainerStarted","Data":"c7e60ef35ab043c3ac745ac0132cf5af5a966586f74c638682238d470e1f2abe"} Feb 19 10:16:35 crc kubenswrapper[4873]: I0219 10:16:35.871223 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" event={"ID":"f5d576b5-56dd-4f9f-b67b-0ee87213ea78","Type":"ContainerStarted","Data":"bd2f1c663d883cba02fe468ae90a655045cbc334efcc127e3604a7513093b1d3"} Feb 19 10:16:35 crc kubenswrapper[4873]: I0219 10:16:35.899114 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" podStartSLOduration=2.459322563 podStartE2EDuration="2.899084869s" podCreationTimestamp="2026-02-19 10:16:33 +0000 UTC" firstStartedPulling="2026-02-19 10:16:34.910580113 +0000 UTC m=+1904.200011751" lastFinishedPulling="2026-02-19 10:16:35.350342419 +0000 UTC m=+1904.639774057" observedRunningTime="2026-02-19 10:16:35.891434657 +0000 UTC m=+1905.180866295" watchObservedRunningTime="2026-02-19 10:16:35.899084869 +0000 UTC m=+1905.188516497" Feb 19 10:17:45 
crc kubenswrapper[4873]: I0219 10:17:45.513616 4873 generic.go:334] "Generic (PLEG): container finished" podID="f5d576b5-56dd-4f9f-b67b-0ee87213ea78" containerID="c7e60ef35ab043c3ac745ac0132cf5af5a966586f74c638682238d470e1f2abe" exitCode=0 Feb 19 10:17:45 crc kubenswrapper[4873]: I0219 10:17:45.513922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" event={"ID":"f5d576b5-56dd-4f9f-b67b-0ee87213ea78","Type":"ContainerDied","Data":"c7e60ef35ab043c3ac745ac0132cf5af5a966586f74c638682238d470e1f2abe"} Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.009378 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.189375 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam\") pod \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.189500 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbt8b\" (UniqueName: \"kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b\") pod \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.189586 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0\") pod \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.189620 4873 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory\") pod \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.189717 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle\") pod \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\" (UID: \"f5d576b5-56dd-4f9f-b67b-0ee87213ea78\") " Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.196370 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f5d576b5-56dd-4f9f-b67b-0ee87213ea78" (UID: "f5d576b5-56dd-4f9f-b67b-0ee87213ea78"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.196448 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b" (OuterVolumeSpecName: "kube-api-access-tbt8b") pod "f5d576b5-56dd-4f9f-b67b-0ee87213ea78" (UID: "f5d576b5-56dd-4f9f-b67b-0ee87213ea78"). InnerVolumeSpecName "kube-api-access-tbt8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.221689 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5d576b5-56dd-4f9f-b67b-0ee87213ea78" (UID: "f5d576b5-56dd-4f9f-b67b-0ee87213ea78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.222084 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory" (OuterVolumeSpecName: "inventory") pod "f5d576b5-56dd-4f9f-b67b-0ee87213ea78" (UID: "f5d576b5-56dd-4f9f-b67b-0ee87213ea78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.222328 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f5d576b5-56dd-4f9f-b67b-0ee87213ea78" (UID: "f5d576b5-56dd-4f9f-b67b-0ee87213ea78"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.292400 4873 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.292431 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.292441 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbt8b\" (UniqueName: \"kubernetes.io/projected/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-kube-api-access-tbt8b\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.292452 4873 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.292463 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5d576b5-56dd-4f9f-b67b-0ee87213ea78-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.530623 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" event={"ID":"f5d576b5-56dd-4f9f-b67b-0ee87213ea78","Type":"ContainerDied","Data":"bd2f1c663d883cba02fe468ae90a655045cbc334efcc127e3604a7513093b1d3"} Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.530657 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dks5c" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.530671 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd2f1c663d883cba02fe468ae90a655045cbc334efcc127e3604a7513093b1d3" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.627091 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb"] Feb 19 10:17:47 crc kubenswrapper[4873]: E0219 10:17:47.627616 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d576b5-56dd-4f9f-b67b-0ee87213ea78" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.627639 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d576b5-56dd-4f9f-b67b-0ee87213ea78" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.627862 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d576b5-56dd-4f9f-b67b-0ee87213ea78" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 
19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.628658 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.631120 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.631346 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.631401 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.631479 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.633086 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.635436 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.660945 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb"] Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.700881 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.700952 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.701029 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl925\" (UniqueName: \"kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.701149 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.701245 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" 
Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.701295 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.802074 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.802151 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.802182 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl925\" (UniqueName: \"kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 
10:17:47.802223 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.802300 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.802332 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.808467 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.809416 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.810360 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.811240 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.820923 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.822139 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl925\" (UniqueName: \"kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:47 crc kubenswrapper[4873]: I0219 10:17:47.957522 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:17:48 crc kubenswrapper[4873]: I0219 10:17:48.240297 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:17:48 crc kubenswrapper[4873]: I0219 10:17:48.240349 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:17:48 crc kubenswrapper[4873]: I0219 10:17:48.478184 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb"] Feb 19 10:17:48 crc kubenswrapper[4873]: I0219 10:17:48.541462 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" event={"ID":"a607f592-ebca-4bf5-9e98-f9e2bc131ff1","Type":"ContainerStarted","Data":"98d6db3ecd221a07d5e3322329f070324703abb980ca281c65ac99b2ade4cb54"} Feb 19 10:17:50 crc kubenswrapper[4873]: I0219 10:17:50.560770 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" 
event={"ID":"a607f592-ebca-4bf5-9e98-f9e2bc131ff1","Type":"ContainerStarted","Data":"2f4be555673dd1a1b739a8ef2e49bb827fa57d32386544997526fd8f60519a74"} Feb 19 10:17:50 crc kubenswrapper[4873]: I0219 10:17:50.580749 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" podStartSLOduration=2.565319735 podStartE2EDuration="3.580727648s" podCreationTimestamp="2026-02-19 10:17:47 +0000 UTC" firstStartedPulling="2026-02-19 10:17:48.48168526 +0000 UTC m=+1977.771116898" lastFinishedPulling="2026-02-19 10:17:49.497093163 +0000 UTC m=+1978.786524811" observedRunningTime="2026-02-19 10:17:50.575956428 +0000 UTC m=+1979.865388066" watchObservedRunningTime="2026-02-19 10:17:50.580727648 +0000 UTC m=+1979.870159296" Feb 19 10:17:58 crc kubenswrapper[4873]: I0219 10:17:58.874677 4873 scope.go:117] "RemoveContainer" containerID="9d6de769e9d17333501a00980ac56829127a539f53f95bb25cf420c5630db360" Feb 19 10:17:58 crc kubenswrapper[4873]: I0219 10:17:58.897586 4873 scope.go:117] "RemoveContainer" containerID="05b98d967cad1ee13631fa999a1fe6d672e3aca8c59af9321211e972aeea3daf" Feb 19 10:17:58 crc kubenswrapper[4873]: I0219 10:17:58.946154 4873 scope.go:117] "RemoveContainer" containerID="d1ee2b33d1585962e5c2d8f8deb1f53ccd7bfb877b0017578ca6bff8f7dfd26e" Feb 19 10:18:18 crc kubenswrapper[4873]: I0219 10:18:18.240424 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:18 crc kubenswrapper[4873]: I0219 10:18:18.240815 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:39 crc kubenswrapper[4873]: I0219 10:18:39.023431 4873 generic.go:334] "Generic (PLEG): container finished" podID="a607f592-ebca-4bf5-9e98-f9e2bc131ff1" containerID="2f4be555673dd1a1b739a8ef2e49bb827fa57d32386544997526fd8f60519a74" exitCode=0 Feb 19 10:18:39 crc kubenswrapper[4873]: I0219 10:18:39.024026 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" event={"ID":"a607f592-ebca-4bf5-9e98-f9e2bc131ff1","Type":"ContainerDied","Data":"2f4be555673dd1a1b739a8ef2e49bb827fa57d32386544997526fd8f60519a74"} Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.528607 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621251 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl925\" (UniqueName: \"kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925\") pod \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621378 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621415 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0\") pod 
\"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621495 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam\") pod \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621577 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory\") pod \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.621704 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle\") pod \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\" (UID: \"a607f592-ebca-4bf5-9e98-f9e2bc131ff1\") " Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.628166 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925" (OuterVolumeSpecName: "kube-api-access-rl925") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "kube-api-access-rl925". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.628524 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.653212 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.660045 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.664923 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.666535 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory" (OuterVolumeSpecName: "inventory") pod "a607f592-ebca-4bf5-9e98-f9e2bc131ff1" (UID: "a607f592-ebca-4bf5-9e98-f9e2bc131ff1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.723919 4873 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.724003 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl925\" (UniqueName: \"kubernetes.io/projected/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-kube-api-access-rl925\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.724015 4873 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.724026 4873 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:40 crc kubenswrapper[4873]: I0219 10:18:40.724034 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:40 crc 
kubenswrapper[4873]: I0219 10:18:40.724044 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a607f592-ebca-4bf5-9e98-f9e2bc131ff1-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.045367 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" event={"ID":"a607f592-ebca-4bf5-9e98-f9e2bc131ff1","Type":"ContainerDied","Data":"98d6db3ecd221a07d5e3322329f070324703abb980ca281c65ac99b2ade4cb54"} Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.045430 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.045442 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d6db3ecd221a07d5e3322329f070324703abb980ca281c65ac99b2ade4cb54" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.161484 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf"] Feb 19 10:18:41 crc kubenswrapper[4873]: E0219 10:18:41.161942 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a607f592-ebca-4bf5-9e98-f9e2bc131ff1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.161961 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a607f592-ebca-4bf5-9e98-f9e2bc131ff1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.162182 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a607f592-ebca-4bf5-9e98-f9e2bc131ff1" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.162912 4873 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.165416 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.165440 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.166052 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.166191 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.166888 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.187565 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf"] Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.234210 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.234251 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: 
\"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.234286 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.234340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.234395 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667dh\" (UniqueName: \"kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.336505 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.336632 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.337180 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.337277 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.337407 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-667dh\" (UniqueName: \"kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.341337 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.342668 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.345559 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.351447 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.363145 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-667dh\" (UniqueName: \"kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:41 crc kubenswrapper[4873]: I0219 10:18:41.480324 4873 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:18:42 crc kubenswrapper[4873]: I0219 10:18:42.025012 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf"] Feb 19 10:18:42 crc kubenswrapper[4873]: I0219 10:18:42.055481 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" event={"ID":"2baa296e-fb37-4d90-a7e4-68f61006e085","Type":"ContainerStarted","Data":"35511ceb1e2947d3a6dc7c9578f623a90ab43f627e3e44514885076fab05f57a"} Feb 19 10:18:43 crc kubenswrapper[4873]: I0219 10:18:43.068499 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" event={"ID":"2baa296e-fb37-4d90-a7e4-68f61006e085","Type":"ContainerStarted","Data":"150a658f281dc96d724d50ff186b8c8e2240351631746c6a8e775330f61234f8"} Feb 19 10:18:43 crc kubenswrapper[4873]: I0219 10:18:43.092445 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" podStartSLOduration=1.3594201830000001 podStartE2EDuration="2.092417937s" podCreationTimestamp="2026-02-19 10:18:41 +0000 UTC" firstStartedPulling="2026-02-19 10:18:42.034840285 +0000 UTC m=+2031.324271923" lastFinishedPulling="2026-02-19 10:18:42.767838039 +0000 UTC m=+2032.057269677" observedRunningTime="2026-02-19 10:18:43.091228848 +0000 UTC m=+2032.380660526" watchObservedRunningTime="2026-02-19 10:18:43.092417937 +0000 UTC m=+2032.381849585" Feb 19 10:18:48 crc kubenswrapper[4873]: I0219 10:18:48.240878 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:48 crc 
kubenswrapper[4873]: I0219 10:18:48.241617 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:48 crc kubenswrapper[4873]: I0219 10:18:48.241709 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:18:48 crc kubenswrapper[4873]: I0219 10:18:48.242923 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:18:48 crc kubenswrapper[4873]: I0219 10:18:48.243043 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5" gracePeriod=600 Feb 19 10:18:49 crc kubenswrapper[4873]: I0219 10:18:49.132854 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5" exitCode=0 Feb 19 10:18:49 crc kubenswrapper[4873]: I0219 10:18:49.132927 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5"} 
Feb 19 10:18:49 crc kubenswrapper[4873]: I0219 10:18:49.133532 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806"} Feb 19 10:18:49 crc kubenswrapper[4873]: I0219 10:18:49.133561 4873 scope.go:117] "RemoveContainer" containerID="a79f1351fda39d47e0aa6a236ce6d5659d161f6ce80f3eb2a7b73e5f69948790" Feb 19 10:20:48 crc kubenswrapper[4873]: I0219 10:20:48.240546 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:20:48 crc kubenswrapper[4873]: I0219 10:20:48.241054 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.190717 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.204753 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.219077 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.392617 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.392687 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.392815 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzjs\" (UniqueName: \"kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.494897 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.494964 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.495053 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzjs\" (UniqueName: \"kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.495884 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.496192 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.514928 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzjs\" (UniqueName: \"kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs\") pod \"community-operators-khxsg\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:08 crc kubenswrapper[4873]: I0219 10:21:08.539956 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:09 crc kubenswrapper[4873]: I0219 10:21:09.094378 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:09 crc kubenswrapper[4873]: I0219 10:21:09.483413 4873 generic.go:334] "Generic (PLEG): container finished" podID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerID="88564ce2a7c708e33071b6f62b0d16c2bb038e8f5355a93d478d75f1ad7f0155" exitCode=0 Feb 19 10:21:09 crc kubenswrapper[4873]: I0219 10:21:09.485530 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:21:09 crc kubenswrapper[4873]: I0219 10:21:09.513549 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerDied","Data":"88564ce2a7c708e33071b6f62b0d16c2bb038e8f5355a93d478d75f1ad7f0155"} Feb 19 10:21:09 crc kubenswrapper[4873]: I0219 10:21:09.513592 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerStarted","Data":"9a6281209346e113f5ffeda37b6972b056be50b23a259933d9933156f1474537"} Feb 19 10:21:10 crc kubenswrapper[4873]: I0219 10:21:10.492165 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerStarted","Data":"a26a20cf4b0b405ee529c00faf971a74104e7443611ca93fcd0adf06c42a09c1"} Feb 19 10:21:11 crc kubenswrapper[4873]: I0219 10:21:11.573354 4873 generic.go:334] "Generic (PLEG): container finished" podID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerID="a26a20cf4b0b405ee529c00faf971a74104e7443611ca93fcd0adf06c42a09c1" exitCode=0 Feb 19 10:21:11 crc kubenswrapper[4873]: I0219 10:21:11.573709 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerDied","Data":"a26a20cf4b0b405ee529c00faf971a74104e7443611ca93fcd0adf06c42a09c1"} Feb 19 10:21:12 crc kubenswrapper[4873]: I0219 10:21:12.583473 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerStarted","Data":"a17f3b6dc772d0a07019802a2bad09bfc1931a1daec0cb2c84a8ce1eca99ee2b"} Feb 19 10:21:12 crc kubenswrapper[4873]: I0219 10:21:12.608710 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khxsg" podStartSLOduration=1.889963015 podStartE2EDuration="4.608693185s" podCreationTimestamp="2026-02-19 10:21:08 +0000 UTC" firstStartedPulling="2026-02-19 10:21:09.485300379 +0000 UTC m=+2178.774732017" lastFinishedPulling="2026-02-19 10:21:12.204030549 +0000 UTC m=+2181.493462187" observedRunningTime="2026-02-19 10:21:12.602839429 +0000 UTC m=+2181.892271097" watchObservedRunningTime="2026-02-19 10:21:12.608693185 +0000 UTC m=+2181.898124833" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.240359 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.240837 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.541025 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.541092 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.617048 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.711914 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:18 crc kubenswrapper[4873]: I0219 10:21:18.856712 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:20 crc kubenswrapper[4873]: I0219 10:21:20.680790 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khxsg" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="registry-server" containerID="cri-o://a17f3b6dc772d0a07019802a2bad09bfc1931a1daec0cb2c84a8ce1eca99ee2b" gracePeriod=2 Feb 19 10:21:21 crc kubenswrapper[4873]: I0219 10:21:21.695182 4873 generic.go:334] "Generic (PLEG): container finished" podID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerID="a17f3b6dc772d0a07019802a2bad09bfc1931a1daec0cb2c84a8ce1eca99ee2b" exitCode=0 Feb 19 10:21:21 crc kubenswrapper[4873]: I0219 10:21:21.695264 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerDied","Data":"a17f3b6dc772d0a07019802a2bad09bfc1931a1daec0cb2c84a8ce1eca99ee2b"} Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.400737 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.476500 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzjs\" (UniqueName: \"kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs\") pod \"f991af75-df09-4a06-bb57-78ab59b5ad7e\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.476624 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities\") pod \"f991af75-df09-4a06-bb57-78ab59b5ad7e\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.476738 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content\") pod \"f991af75-df09-4a06-bb57-78ab59b5ad7e\" (UID: \"f991af75-df09-4a06-bb57-78ab59b5ad7e\") " Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.478524 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities" (OuterVolumeSpecName: "utilities") pod "f991af75-df09-4a06-bb57-78ab59b5ad7e" (UID: "f991af75-df09-4a06-bb57-78ab59b5ad7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.486656 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs" (OuterVolumeSpecName: "kube-api-access-8gzjs") pod "f991af75-df09-4a06-bb57-78ab59b5ad7e" (UID: "f991af75-df09-4a06-bb57-78ab59b5ad7e"). InnerVolumeSpecName "kube-api-access-8gzjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.530987 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f991af75-df09-4a06-bb57-78ab59b5ad7e" (UID: "f991af75-df09-4a06-bb57-78ab59b5ad7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.578843 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gzjs\" (UniqueName: \"kubernetes.io/projected/f991af75-df09-4a06-bb57-78ab59b5ad7e-kube-api-access-8gzjs\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.578879 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.578893 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f991af75-df09-4a06-bb57-78ab59b5ad7e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.704858 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khxsg" event={"ID":"f991af75-df09-4a06-bb57-78ab59b5ad7e","Type":"ContainerDied","Data":"9a6281209346e113f5ffeda37b6972b056be50b23a259933d9933156f1474537"} Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.704913 4873 scope.go:117] "RemoveContainer" containerID="a17f3b6dc772d0a07019802a2bad09bfc1931a1daec0cb2c84a8ce1eca99ee2b" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.705041 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khxsg" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.740296 4873 scope.go:117] "RemoveContainer" containerID="a26a20cf4b0b405ee529c00faf971a74104e7443611ca93fcd0adf06c42a09c1" Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.743714 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.752926 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khxsg"] Feb 19 10:21:22 crc kubenswrapper[4873]: I0219 10:21:22.769582 4873 scope.go:117] "RemoveContainer" containerID="88564ce2a7c708e33071b6f62b0d16c2bb038e8f5355a93d478d75f1ad7f0155" Feb 19 10:21:23 crc kubenswrapper[4873]: I0219 10:21:23.495591 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" path="/var/lib/kubelet/pods/f991af75-df09-4a06-bb57-78ab59b5ad7e/volumes" Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.240011 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.240659 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.240713 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 
10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.241592 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.241662 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" gracePeriod=600 Feb 19 10:21:48 crc kubenswrapper[4873]: E0219 10:21:48.363126 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.946424 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" exitCode=0 Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.946497 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806"} Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.946548 4873 scope.go:117] 
"RemoveContainer" containerID="4ac19c5656812709276b88acb63ac96a06120ceef6f1ea4e7a6c41a75ff13fe5" Feb 19 10:21:48 crc kubenswrapper[4873]: I0219 10:21:48.947595 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:21:48 crc kubenswrapper[4873]: E0219 10:21:48.948144 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:22:01 crc kubenswrapper[4873]: I0219 10:22:01.494496 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:22:01 crc kubenswrapper[4873]: E0219 10:22:01.495712 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:22:15 crc kubenswrapper[4873]: I0219 10:22:15.484730 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:22:15 crc kubenswrapper[4873]: E0219 10:22:15.485540 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.076142 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:22 crc kubenswrapper[4873]: E0219 10:22:22.077490 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="extract-content" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.077516 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="extract-content" Feb 19 10:22:22 crc kubenswrapper[4873]: E0219 10:22:22.077537 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="extract-utilities" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.077549 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="extract-utilities" Feb 19 10:22:22 crc kubenswrapper[4873]: E0219 10:22:22.077576 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="registry-server" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.077588 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="registry-server" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.077914 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f991af75-df09-4a06-bb57-78ab59b5ad7e" containerName="registry-server" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.080158 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.089279 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.168230 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.168310 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.168901 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvp7\" (UniqueName: \"kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.270932 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.270990 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.271030 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvp7\" (UniqueName: \"kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.271582 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.271743 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.292611 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvp7\" (UniqueName: \"kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7\") pod \"redhat-marketplace-jt52q\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.439180 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.673370 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.678759 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.686046 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.784384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.784529 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4fg2\" (UniqueName: \"kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.784944 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.886575 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.886667 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4fg2\" (UniqueName: \"kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.886822 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.887160 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.887303 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.912687 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4fg2\" (UniqueName: 
\"kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2\") pod \"redhat-operators-j4bwz\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:22 crc kubenswrapper[4873]: I0219 10:22:22.961370 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:23 crc kubenswrapper[4873]: I0219 10:22:22.999984 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:23 crc kubenswrapper[4873]: I0219 10:22:23.286704 4873 generic.go:334] "Generic (PLEG): container finished" podID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerID="510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482" exitCode=0 Feb 19 10:22:23 crc kubenswrapper[4873]: I0219 10:22:23.286956 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerDied","Data":"510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482"} Feb 19 10:22:23 crc kubenswrapper[4873]: I0219 10:22:23.286979 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerStarted","Data":"71ec27bcfbbc970191c06674be63b5836ab590c5cabc97f66738bf2c975611ef"} Feb 19 10:22:23 crc kubenswrapper[4873]: I0219 10:22:23.459583 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:23 crc kubenswrapper[4873]: W0219 10:22:23.460172 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687dca98_34c6_47e6_a7cf_89bf448a3426.slice/crio-5809c17a99debc08e53af7053478d25e34b6c3d1eac2f1472718d828db0dea27 WatchSource:0}: Error finding container 
5809c17a99debc08e53af7053478d25e34b6c3d1eac2f1472718d828db0dea27: Status 404 returned error can't find the container with id 5809c17a99debc08e53af7053478d25e34b6c3d1eac2f1472718d828db0dea27 Feb 19 10:22:24 crc kubenswrapper[4873]: I0219 10:22:24.296589 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerStarted","Data":"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192"} Feb 19 10:22:24 crc kubenswrapper[4873]: I0219 10:22:24.298219 4873 generic.go:334] "Generic (PLEG): container finished" podID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerID="648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2" exitCode=0 Feb 19 10:22:24 crc kubenswrapper[4873]: I0219 10:22:24.298270 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerDied","Data":"648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2"} Feb 19 10:22:24 crc kubenswrapper[4873]: I0219 10:22:24.298301 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerStarted","Data":"5809c17a99debc08e53af7053478d25e34b6c3d1eac2f1472718d828db0dea27"} Feb 19 10:22:25 crc kubenswrapper[4873]: I0219 10:22:25.308129 4873 generic.go:334] "Generic (PLEG): container finished" podID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerID="94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192" exitCode=0 Feb 19 10:22:25 crc kubenswrapper[4873]: I0219 10:22:25.308383 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerDied","Data":"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192"} Feb 19 10:22:26 
crc kubenswrapper[4873]: I0219 10:22:26.326683 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerStarted","Data":"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1"} Feb 19 10:22:26 crc kubenswrapper[4873]: E0219 10:22:26.980698 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687dca98_34c6_47e6_a7cf_89bf448a3426.slice/crio-conmon-372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:22:27 crc kubenswrapper[4873]: I0219 10:22:27.337430 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerStarted","Data":"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20"} Feb 19 10:22:27 crc kubenswrapper[4873]: I0219 10:22:27.340232 4873 generic.go:334] "Generic (PLEG): container finished" podID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerID="372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1" exitCode=0 Feb 19 10:22:27 crc kubenswrapper[4873]: I0219 10:22:27.340268 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerDied","Data":"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1"} Feb 19 10:22:27 crc kubenswrapper[4873]: I0219 10:22:27.362720 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jt52q" podStartSLOduration=2.412323232 podStartE2EDuration="5.362701752s" podCreationTimestamp="2026-02-19 10:22:22 +0000 UTC" firstStartedPulling="2026-02-19 10:22:23.29033891 +0000 UTC 
m=+2252.579770548" lastFinishedPulling="2026-02-19 10:22:26.24071743 +0000 UTC m=+2255.530149068" observedRunningTime="2026-02-19 10:22:27.355461592 +0000 UTC m=+2256.644893260" watchObservedRunningTime="2026-02-19 10:22:27.362701752 +0000 UTC m=+2256.652133390" Feb 19 10:22:28 crc kubenswrapper[4873]: I0219 10:22:28.351797 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerStarted","Data":"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc"} Feb 19 10:22:30 crc kubenswrapper[4873]: I0219 10:22:30.485146 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:22:30 crc kubenswrapper[4873]: E0219 10:22:30.485819 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:22:32 crc kubenswrapper[4873]: I0219 10:22:32.440271 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:32 crc kubenswrapper[4873]: I0219 10:22:32.440310 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:32 crc kubenswrapper[4873]: I0219 10:22:32.482660 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:32 crc kubenswrapper[4873]: I0219 10:22:32.501652 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-j4bwz" podStartSLOduration=7.004408902 podStartE2EDuration="10.501634455s" podCreationTimestamp="2026-02-19 10:22:22 +0000 UTC" firstStartedPulling="2026-02-19 10:22:24.299471047 +0000 UTC m=+2253.588902685" lastFinishedPulling="2026-02-19 10:22:27.79669659 +0000 UTC m=+2257.086128238" observedRunningTime="2026-02-19 10:22:28.374917737 +0000 UTC m=+2257.664349395" watchObservedRunningTime="2026-02-19 10:22:32.501634455 +0000 UTC m=+2261.791066093" Feb 19 10:22:33 crc kubenswrapper[4873]: I0219 10:22:33.000086 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:33 crc kubenswrapper[4873]: I0219 10:22:33.000169 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:33 crc kubenswrapper[4873]: I0219 10:22:33.442940 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:33 crc kubenswrapper[4873]: I0219 10:22:33.722545 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:34 crc kubenswrapper[4873]: I0219 10:22:34.046793 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j4bwz" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="registry-server" probeResult="failure" output=< Feb 19 10:22:34 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:22:34 crc kubenswrapper[4873]: > Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.412864 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jt52q" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="registry-server" 
containerID="cri-o://5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20" gracePeriod=2 Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.869039 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.957210 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content\") pod \"9566d3f3-4b33-430a-9a8c-a32b3425b487\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.957339 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrvp7\" (UniqueName: \"kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7\") pod \"9566d3f3-4b33-430a-9a8c-a32b3425b487\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.957435 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities\") pod \"9566d3f3-4b33-430a-9a8c-a32b3425b487\" (UID: \"9566d3f3-4b33-430a-9a8c-a32b3425b487\") " Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.958312 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities" (OuterVolumeSpecName: "utilities") pod "9566d3f3-4b33-430a-9a8c-a32b3425b487" (UID: "9566d3f3-4b33-430a-9a8c-a32b3425b487"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.963567 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7" (OuterVolumeSpecName: "kube-api-access-rrvp7") pod "9566d3f3-4b33-430a-9a8c-a32b3425b487" (UID: "9566d3f3-4b33-430a-9a8c-a32b3425b487"). InnerVolumeSpecName "kube-api-access-rrvp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:35 crc kubenswrapper[4873]: I0219 10:22:35.980454 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9566d3f3-4b33-430a-9a8c-a32b3425b487" (UID: "9566d3f3-4b33-430a-9a8c-a32b3425b487"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.059475 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.059515 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrvp7\" (UniqueName: \"kubernetes.io/projected/9566d3f3-4b33-430a-9a8c-a32b3425b487-kube-api-access-rrvp7\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.059547 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566d3f3-4b33-430a-9a8c-a32b3425b487-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.424933 4873 generic.go:334] "Generic (PLEG): container finished" podID="9566d3f3-4b33-430a-9a8c-a32b3425b487" 
containerID="5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20" exitCode=0 Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.425012 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt52q" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.425008 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerDied","Data":"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20"} Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.425699 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt52q" event={"ID":"9566d3f3-4b33-430a-9a8c-a32b3425b487","Type":"ContainerDied","Data":"71ec27bcfbbc970191c06674be63b5836ab590c5cabc97f66738bf2c975611ef"} Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.425747 4873 scope.go:117] "RemoveContainer" containerID="5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.453767 4873 scope.go:117] "RemoveContainer" containerID="94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.468766 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.476769 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt52q"] Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.495942 4873 scope.go:117] "RemoveContainer" containerID="510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.521667 4873 scope.go:117] "RemoveContainer" containerID="5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20" Feb 19 
10:22:36 crc kubenswrapper[4873]: E0219 10:22:36.522275 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20\": container with ID starting with 5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20 not found: ID does not exist" containerID="5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.522339 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20"} err="failed to get container status \"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20\": rpc error: code = NotFound desc = could not find container \"5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20\": container with ID starting with 5bbae291b4d7ddf6b641be79b8adb324a29804d4ff6e6a62036b7fe076225f20 not found: ID does not exist" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.522382 4873 scope.go:117] "RemoveContainer" containerID="94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192" Feb 19 10:22:36 crc kubenswrapper[4873]: E0219 10:22:36.522877 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192\": container with ID starting with 94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192 not found: ID does not exist" containerID="94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.522930 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192"} err="failed to get container status 
\"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192\": rpc error: code = NotFound desc = could not find container \"94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192\": container with ID starting with 94b6f685f6f4cc69951d4449012cc4b5ef34653b8f05ea8a0f34417d235c7192 not found: ID does not exist" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.522967 4873 scope.go:117] "RemoveContainer" containerID="510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482" Feb 19 10:22:36 crc kubenswrapper[4873]: E0219 10:22:36.523241 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482\": container with ID starting with 510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482 not found: ID does not exist" containerID="510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482" Feb 19 10:22:36 crc kubenswrapper[4873]: I0219 10:22:36.523280 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482"} err="failed to get container status \"510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482\": rpc error: code = NotFound desc = could not find container \"510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482\": container with ID starting with 510653dd0fd33a1ceeb0daa0d0a2e54a7b51b7404cbad36dc2f8b57117f6c482 not found: ID does not exist" Feb 19 10:22:37 crc kubenswrapper[4873]: I0219 10:22:37.495651 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" path="/var/lib/kubelet/pods/9566d3f3-4b33-430a-9a8c-a32b3425b487/volumes" Feb 19 10:22:40 crc kubenswrapper[4873]: I0219 10:22:40.479557 4873 generic.go:334] "Generic (PLEG): container finished" podID="2baa296e-fb37-4d90-a7e4-68f61006e085" 
containerID="150a658f281dc96d724d50ff186b8c8e2240351631746c6a8e775330f61234f8" exitCode=0 Feb 19 10:22:40 crc kubenswrapper[4873]: I0219 10:22:40.479640 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" event={"ID":"2baa296e-fb37-4d90-a7e4-68f61006e085","Type":"ContainerDied","Data":"150a658f281dc96d724d50ff186b8c8e2240351631746c6a8e775330f61234f8"} Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.900883 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.975456 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory\") pod \"2baa296e-fb37-4d90-a7e4-68f61006e085\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.975592 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle\") pod \"2baa296e-fb37-4d90-a7e4-68f61006e085\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.975652 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0\") pod \"2baa296e-fb37-4d90-a7e4-68f61006e085\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.975671 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-667dh\" (UniqueName: \"kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh\") pod 
\"2baa296e-fb37-4d90-a7e4-68f61006e085\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.976226 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam\") pod \"2baa296e-fb37-4d90-a7e4-68f61006e085\" (UID: \"2baa296e-fb37-4d90-a7e4-68f61006e085\") " Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.980588 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh" (OuterVolumeSpecName: "kube-api-access-667dh") pod "2baa296e-fb37-4d90-a7e4-68f61006e085" (UID: "2baa296e-fb37-4d90-a7e4-68f61006e085"). InnerVolumeSpecName "kube-api-access-667dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:41 crc kubenswrapper[4873]: I0219 10:22:41.981258 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2baa296e-fb37-4d90-a7e4-68f61006e085" (UID: "2baa296e-fb37-4d90-a7e4-68f61006e085"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.002998 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory" (OuterVolumeSpecName: "inventory") pod "2baa296e-fb37-4d90-a7e4-68f61006e085" (UID: "2baa296e-fb37-4d90-a7e4-68f61006e085"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.006274 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2baa296e-fb37-4d90-a7e4-68f61006e085" (UID: "2baa296e-fb37-4d90-a7e4-68f61006e085"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.027110 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2baa296e-fb37-4d90-a7e4-68f61006e085" (UID: "2baa296e-fb37-4d90-a7e4-68f61006e085"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.078894 4873 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.078925 4873 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.078934 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-667dh\" (UniqueName: \"kubernetes.io/projected/2baa296e-fb37-4d90-a7e4-68f61006e085-kube-api-access-667dh\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.078942 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.078951 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2baa296e-fb37-4d90-a7e4-68f61006e085-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.510913 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" event={"ID":"2baa296e-fb37-4d90-a7e4-68f61006e085","Type":"ContainerDied","Data":"35511ceb1e2947d3a6dc7c9578f623a90ab43f627e3e44514885076fab05f57a"} Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.510967 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35511ceb1e2947d3a6dc7c9578f623a90ab43f627e3e44514885076fab05f57a" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.511051 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.633392 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6"] Feb 19 10:22:42 crc kubenswrapper[4873]: E0219 10:22:42.633863 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="extract-content" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.633884 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="extract-content" Feb 19 10:22:42 crc kubenswrapper[4873]: E0219 10:22:42.633901 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2baa296e-fb37-4d90-a7e4-68f61006e085" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.633909 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2baa296e-fb37-4d90-a7e4-68f61006e085" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:42 crc kubenswrapper[4873]: E0219 10:22:42.633937 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="extract-utilities" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.633945 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="extract-utilities" Feb 19 10:22:42 crc kubenswrapper[4873]: E0219 10:22:42.633955 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="registry-server" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.633963 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="registry-server" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.634217 4873 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2baa296e-fb37-4d90-a7e4-68f61006e085" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.634244 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9566d3f3-4b33-430a-9a8c-a32b3425b487" containerName="registry-server" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.634987 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.638897 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.640005 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.640195 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.641093 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.641450 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.642004 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.642194 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.656359 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6"] Feb 19 10:22:42 crc 
kubenswrapper[4873]: I0219 10:22:42.694291 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmvvj\" (UniqueName: \"kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694454 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694483 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694532 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694602 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694633 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694679 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694754 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694778 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.694801 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.796626 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.796684 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc 
kubenswrapper[4873]: I0219 10:22:42.796831 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.796860 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.796930 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.796980 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.797053 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.797140 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.797179 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.797224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.797291 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmvvj\" (UniqueName: \"kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.798872 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.799947 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.800191 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.803590 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.804796 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.805595 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.805623 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.805983 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.809968 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc 
kubenswrapper[4873]: I0219 10:22:42.819552 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmvvj\" (UniqueName: \"kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.826484 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v25t6\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:42 crc kubenswrapper[4873]: I0219 10:22:42.973534 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:22:43 crc kubenswrapper[4873]: I0219 10:22:43.046658 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:43 crc kubenswrapper[4873]: I0219 10:22:43.112681 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:43 crc kubenswrapper[4873]: W0219 10:22:43.535208 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce5f426d_554a_469a_be1e_e3e1b9bfa68e.slice/crio-98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7 WatchSource:0}: Error finding container 98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7: Status 404 returned error can't find the container with id 98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7 Feb 19 10:22:43 crc 
kubenswrapper[4873]: I0219 10:22:43.535777 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6"] Feb 19 10:22:44 crc kubenswrapper[4873]: I0219 10:22:44.531273 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" event={"ID":"ce5f426d-554a-469a-be1e-e3e1b9bfa68e","Type":"ContainerStarted","Data":"7c7950ee14ffefcf75376610caefc5f50e26f12cde49f2346a88b479e29c5643"} Feb 19 10:22:44 crc kubenswrapper[4873]: I0219 10:22:44.531610 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" event={"ID":"ce5f426d-554a-469a-be1e-e3e1b9bfa68e","Type":"ContainerStarted","Data":"98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7"} Feb 19 10:22:44 crc kubenswrapper[4873]: I0219 10:22:44.553518 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" podStartSLOduration=2.111810755 podStartE2EDuration="2.553496848s" podCreationTimestamp="2026-02-19 10:22:42 +0000 UTC" firstStartedPulling="2026-02-19 10:22:43.538172475 +0000 UTC m=+2272.827604123" lastFinishedPulling="2026-02-19 10:22:43.979858568 +0000 UTC m=+2273.269290216" observedRunningTime="2026-02-19 10:22:44.548678688 +0000 UTC m=+2273.838110346" watchObservedRunningTime="2026-02-19 10:22:44.553496848 +0000 UTC m=+2273.842928486" Feb 19 10:22:45 crc kubenswrapper[4873]: I0219 10:22:45.485317 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:22:45 crc kubenswrapper[4873]: E0219 10:22:45.486066 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:22:45 crc kubenswrapper[4873]: I0219 10:22:45.981298 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:45 crc kubenswrapper[4873]: I0219 10:22:45.981509 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j4bwz" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="registry-server" containerID="cri-o://45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc" gracePeriod=2 Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.444990 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.469002 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content\") pod \"687dca98-34c6-47e6-a7cf-89bf448a3426\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.469165 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities\") pod \"687dca98-34c6-47e6-a7cf-89bf448a3426\" (UID: \"687dca98-34c6-47e6-a7cf-89bf448a3426\") " Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.469221 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4fg2\" (UniqueName: \"kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2\") pod \"687dca98-34c6-47e6-a7cf-89bf448a3426\" (UID: 
\"687dca98-34c6-47e6-a7cf-89bf448a3426\") " Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.470128 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities" (OuterVolumeSpecName: "utilities") pod "687dca98-34c6-47e6-a7cf-89bf448a3426" (UID: "687dca98-34c6-47e6-a7cf-89bf448a3426"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.485337 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2" (OuterVolumeSpecName: "kube-api-access-w4fg2") pod "687dca98-34c6-47e6-a7cf-89bf448a3426" (UID: "687dca98-34c6-47e6-a7cf-89bf448a3426"). InnerVolumeSpecName "kube-api-access-w4fg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.548982 4873 generic.go:334] "Generic (PLEG): container finished" podID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerID="45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc" exitCode=0 Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.549031 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerDied","Data":"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc"} Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.549050 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4bwz" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.549058 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4bwz" event={"ID":"687dca98-34c6-47e6-a7cf-89bf448a3426","Type":"ContainerDied","Data":"5809c17a99debc08e53af7053478d25e34b6c3d1eac2f1472718d828db0dea27"} Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.549080 4873 scope.go:117] "RemoveContainer" containerID="45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.579475 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.579603 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4fg2\" (UniqueName: \"kubernetes.io/projected/687dca98-34c6-47e6-a7cf-89bf448a3426-kube-api-access-w4fg2\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.588424 4873 scope.go:117] "RemoveContainer" containerID="372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.607453 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "687dca98-34c6-47e6-a7cf-89bf448a3426" (UID: "687dca98-34c6-47e6-a7cf-89bf448a3426"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.614343 4873 scope.go:117] "RemoveContainer" containerID="648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.677176 4873 scope.go:117] "RemoveContainer" containerID="45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc" Feb 19 10:22:46 crc kubenswrapper[4873]: E0219 10:22:46.677771 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc\": container with ID starting with 45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc not found: ID does not exist" containerID="45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.677811 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc"} err="failed to get container status \"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc\": rpc error: code = NotFound desc = could not find container \"45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc\": container with ID starting with 45d62d795db1c823d67e6e5582b82865d36d5ea8aab87289e80ac1ab64eab8cc not found: ID does not exist" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.677838 4873 scope.go:117] "RemoveContainer" containerID="372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1" Feb 19 10:22:46 crc kubenswrapper[4873]: E0219 10:22:46.678193 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1\": container with ID starting with 
372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1 not found: ID does not exist" containerID="372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.678221 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1"} err="failed to get container status \"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1\": rpc error: code = NotFound desc = could not find container \"372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1\": container with ID starting with 372cfba496aa8daf99f042c076f07838550f2060d0c2498dd29a0f4a4051bad1 not found: ID does not exist" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.678239 4873 scope.go:117] "RemoveContainer" containerID="648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2" Feb 19 10:22:46 crc kubenswrapper[4873]: E0219 10:22:46.678559 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2\": container with ID starting with 648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2 not found: ID does not exist" containerID="648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.678598 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2"} err="failed to get container status \"648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2\": rpc error: code = NotFound desc = could not find container \"648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2\": container with ID starting with 648394c940eca417606a1205ea47146261640581b2a250bad362a4285f73a1b2 not found: ID does not 
exist" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.680944 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687dca98-34c6-47e6-a7cf-89bf448a3426-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.887884 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:46 crc kubenswrapper[4873]: I0219 10:22:46.935004 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j4bwz"] Feb 19 10:22:47 crc kubenswrapper[4873]: I0219 10:22:47.505186 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" path="/var/lib/kubelet/pods/687dca98-34c6-47e6-a7cf-89bf448a3426/volumes" Feb 19 10:22:58 crc kubenswrapper[4873]: I0219 10:22:58.484558 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:22:58 crc kubenswrapper[4873]: E0219 10:22:58.485461 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:23:10 crc kubenswrapper[4873]: I0219 10:23:10.484525 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:23:10 crc kubenswrapper[4873]: E0219 10:23:10.485260 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:23:25 crc kubenswrapper[4873]: I0219 10:23:25.486063 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:23:25 crc kubenswrapper[4873]: E0219 10:23:25.487040 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:23:38 crc kubenswrapper[4873]: I0219 10:23:38.484805 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:23:38 crc kubenswrapper[4873]: E0219 10:23:38.485493 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:23:52 crc kubenswrapper[4873]: I0219 10:23:52.485453 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:23:52 crc kubenswrapper[4873]: E0219 10:23:52.486175 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:24:04 crc kubenswrapper[4873]: I0219 10:24:04.484684 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:24:04 crc kubenswrapper[4873]: E0219 10:24:04.485487 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:24:16 crc kubenswrapper[4873]: I0219 10:24:16.484236 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:24:16 crc kubenswrapper[4873]: E0219 10:24:16.484935 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:24:27 crc kubenswrapper[4873]: I0219 10:24:27.484490 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:24:27 crc kubenswrapper[4873]: E0219 10:24:27.485408 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:24:40 crc kubenswrapper[4873]: I0219 10:24:40.484187 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:24:40 crc kubenswrapper[4873]: E0219 10:24:40.485167 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:24:54 crc kubenswrapper[4873]: I0219 10:24:54.484062 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:24:54 crc kubenswrapper[4873]: E0219 10:24:54.485503 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:25:09 crc kubenswrapper[4873]: I0219 10:25:09.484338 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:25:09 crc kubenswrapper[4873]: E0219 10:25:09.485300 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:25:13 crc kubenswrapper[4873]: I0219 10:25:13.258524 4873 generic.go:334] "Generic (PLEG): container finished" podID="ce5f426d-554a-469a-be1e-e3e1b9bfa68e" containerID="7c7950ee14ffefcf75376610caefc5f50e26f12cde49f2346a88b479e29c5643" exitCode=0 Feb 19 10:25:13 crc kubenswrapper[4873]: I0219 10:25:13.258632 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" event={"ID":"ce5f426d-554a-469a-be1e-e3e1b9bfa68e","Type":"ContainerDied","Data":"7c7950ee14ffefcf75376610caefc5f50e26f12cde49f2346a88b479e29c5643"} Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.688296 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745574 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745701 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745720 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745796 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745852 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745873 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745894 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmvvj\" (UniqueName: \"kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745924 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: 
\"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745945 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745971 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.745988 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle\") pod \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\" (UID: \"ce5f426d-554a-469a-be1e-e3e1b9bfa68e\") " Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.758565 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj" (OuterVolumeSpecName: "kube-api-access-wmvvj") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "kube-api-access-wmvvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.767636 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.771983 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.778863 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.779490 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.780735 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.781914 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.785709 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.785959 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.804197 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.804243 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory" (OuterVolumeSpecName: "inventory") pod "ce5f426d-554a-469a-be1e-e3e1b9bfa68e" (UID: "ce5f426d-554a-469a-be1e-e3e1b9bfa68e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848549 4873 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848591 4873 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848605 4873 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848613 4873 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848626 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848635 4873 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wmvvj\" (UniqueName: \"kubernetes.io/projected/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-kube-api-access-wmvvj\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848643 4873 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848651 4873 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848660 4873 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848669 4873 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4873]: I0219 10:25:14.848679 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce5f426d-554a-469a-be1e-e3e1b9bfa68e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.279557 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" event={"ID":"ce5f426d-554a-469a-be1e-e3e1b9bfa68e","Type":"ContainerDied","Data":"98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7"} Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.279602 4873 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cf7115a91b048cf40f5fa0771846df571cc77d9386148ce081fbf2082750b7" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.279609 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v25t6" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.389132 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz"] Feb 19 10:25:15 crc kubenswrapper[4873]: E0219 10:25:15.390034 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="extract-utilities" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.390237 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="extract-utilities" Feb 19 10:25:15 crc kubenswrapper[4873]: E0219 10:25:15.390457 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="registry-server" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.390601 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="registry-server" Feb 19 10:25:15 crc kubenswrapper[4873]: E0219 10:25:15.390749 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5f426d-554a-469a-be1e-e3e1b9bfa68e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.390879 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5f426d-554a-469a-be1e-e3e1b9bfa68e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:25:15 crc kubenswrapper[4873]: E0219 10:25:15.391040 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="extract-content" Feb 19 
10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.391221 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="extract-content" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.391778 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="687dca98-34c6-47e6-a7cf-89bf448a3426" containerName="registry-server" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.391951 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5f426d-554a-469a-be1e-e3e1b9bfa68e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.393342 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.396712 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.397762 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.397772 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.401803 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5l9s5" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.404343 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.419665 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz"] Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459236 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459298 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459353 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459384 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459413 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459499 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.459556 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99k8b\" (UniqueName: \"kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.562761 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.564733 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99k8b\" (UniqueName: \"kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.565076 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.565248 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.565431 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.565552 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 
crc kubenswrapper[4873]: I0219 10:25:15.565667 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.575617 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.579092 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.581924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.585819 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.587703 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.588757 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.596373 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99k8b\" (UniqueName: \"kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:15 crc kubenswrapper[4873]: I0219 10:25:15.751793 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:25:16 crc kubenswrapper[4873]: I0219 10:25:16.285495 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz"] Feb 19 10:25:16 crc kubenswrapper[4873]: W0219 10:25:16.290757 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf143721_2963_4009_8e23_0c283b4a88a3.slice/crio-ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac WatchSource:0}: Error finding container ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac: Status 404 returned error can't find the container with id ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac Feb 19 10:25:17 crc kubenswrapper[4873]: I0219 10:25:17.302179 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" event={"ID":"bf143721-2963-4009-8e23-0c283b4a88a3","Type":"ContainerStarted","Data":"b868117e1afa026d3ebaa8096a793cced60c973ae3cbedf3ed41777195b019e2"} Feb 19 10:25:17 crc kubenswrapper[4873]: I0219 10:25:17.302493 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" event={"ID":"bf143721-2963-4009-8e23-0c283b4a88a3","Type":"ContainerStarted","Data":"ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac"} Feb 19 10:25:17 crc kubenswrapper[4873]: I0219 10:25:17.322666 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" podStartSLOduration=1.9381076849999999 podStartE2EDuration="2.322641456s" podCreationTimestamp="2026-02-19 10:25:15 +0000 UTC" firstStartedPulling="2026-02-19 10:25:16.295370413 +0000 UTC m=+2425.584802051" lastFinishedPulling="2026-02-19 10:25:16.679904194 +0000 UTC m=+2425.969335822" 
observedRunningTime="2026-02-19 10:25:17.316995654 +0000 UTC m=+2426.606427302" watchObservedRunningTime="2026-02-19 10:25:17.322641456 +0000 UTC m=+2426.612073104" Feb 19 10:25:24 crc kubenswrapper[4873]: I0219 10:25:24.502330 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:25:24 crc kubenswrapper[4873]: E0219 10:25:24.503221 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:25:37 crc kubenswrapper[4873]: I0219 10:25:37.484790 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:25:37 crc kubenswrapper[4873]: E0219 10:25:37.485620 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:25:48 crc kubenswrapper[4873]: I0219 10:25:48.484532 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:25:48 crc kubenswrapper[4873]: E0219 10:25:48.485338 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:26:01 crc kubenswrapper[4873]: I0219 10:26:01.492822 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:26:01 crc kubenswrapper[4873]: E0219 10:26:01.493698 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.247584 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bdcwz"] Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.250754 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.264503 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdcwz"] Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.446453 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-utilities\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.446514 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-catalog-content\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.446586 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmp7\" (UniqueName: \"kubernetes.io/projected/d27fce7f-0ae7-4e22-885f-ad2a398647cc-kube-api-access-vrmp7\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.549643 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-utilities\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.550063 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-catalog-content\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.550223 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmp7\" (UniqueName: \"kubernetes.io/projected/d27fce7f-0ae7-4e22-885f-ad2a398647cc-kube-api-access-vrmp7\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.550139 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-utilities\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.550500 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d27fce7f-0ae7-4e22-885f-ad2a398647cc-catalog-content\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.571289 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmp7\" (UniqueName: \"kubernetes.io/projected/d27fce7f-0ae7-4e22-885f-ad2a398647cc-kube-api-access-vrmp7\") pod \"certified-operators-bdcwz\" (UID: \"d27fce7f-0ae7-4e22-885f-ad2a398647cc\") " pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:09 crc kubenswrapper[4873]: I0219 10:26:09.870287 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:10 crc kubenswrapper[4873]: I0219 10:26:10.347262 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdcwz"] Feb 19 10:26:10 crc kubenswrapper[4873]: I0219 10:26:10.844839 4873 generic.go:334] "Generic (PLEG): container finished" podID="d27fce7f-0ae7-4e22-885f-ad2a398647cc" containerID="f9af54d31ed7feab3f88e8c48aeaa9ce492f22930075ffd283521574bafef2d8" exitCode=0 Feb 19 10:26:10 crc kubenswrapper[4873]: I0219 10:26:10.844880 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdcwz" event={"ID":"d27fce7f-0ae7-4e22-885f-ad2a398647cc","Type":"ContainerDied","Data":"f9af54d31ed7feab3f88e8c48aeaa9ce492f22930075ffd283521574bafef2d8"} Feb 19 10:26:10 crc kubenswrapper[4873]: I0219 10:26:10.844915 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdcwz" event={"ID":"d27fce7f-0ae7-4e22-885f-ad2a398647cc","Type":"ContainerStarted","Data":"aad2e31bce9aa3c21d2169431db6a2f9414a0e6b52e4e906fafee120cecb785e"} Feb 19 10:26:10 crc kubenswrapper[4873]: I0219 10:26:10.846970 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:26:14 crc kubenswrapper[4873]: I0219 10:26:14.484546 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:26:14 crc kubenswrapper[4873]: E0219 10:26:14.485548 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 
10:26:15 crc kubenswrapper[4873]: I0219 10:26:15.885881 4873 generic.go:334] "Generic (PLEG): container finished" podID="d27fce7f-0ae7-4e22-885f-ad2a398647cc" containerID="a119938961505d7dd6e47331558f948cbf13d25c3b4a7ac6dc165923f892fccf" exitCode=0 Feb 19 10:26:15 crc kubenswrapper[4873]: I0219 10:26:15.885952 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdcwz" event={"ID":"d27fce7f-0ae7-4e22-885f-ad2a398647cc","Type":"ContainerDied","Data":"a119938961505d7dd6e47331558f948cbf13d25c3b4a7ac6dc165923f892fccf"} Feb 19 10:26:16 crc kubenswrapper[4873]: I0219 10:26:16.900199 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdcwz" event={"ID":"d27fce7f-0ae7-4e22-885f-ad2a398647cc","Type":"ContainerStarted","Data":"9f573e0fb65ef301f9ba3f550f7ba48b9318be65474d4434ed019f4cf2aab52e"} Feb 19 10:26:16 crc kubenswrapper[4873]: I0219 10:26:16.921764 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bdcwz" podStartSLOduration=2.224853929 podStartE2EDuration="7.921745924s" podCreationTimestamp="2026-02-19 10:26:09 +0000 UTC" firstStartedPulling="2026-02-19 10:26:10.846721188 +0000 UTC m=+2480.136152816" lastFinishedPulling="2026-02-19 10:26:16.543613173 +0000 UTC m=+2485.833044811" observedRunningTime="2026-02-19 10:26:16.918610106 +0000 UTC m=+2486.208041754" watchObservedRunningTime="2026-02-19 10:26:16.921745924 +0000 UTC m=+2486.211177562" Feb 19 10:26:19 crc kubenswrapper[4873]: I0219 10:26:19.871123 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:19 crc kubenswrapper[4873]: I0219 10:26:19.871926 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:19 crc kubenswrapper[4873]: I0219 10:26:19.933513 4873 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:28 crc kubenswrapper[4873]: I0219 10:26:28.484303 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:26:28 crc kubenswrapper[4873]: E0219 10:26:28.484861 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:26:29 crc kubenswrapper[4873]: I0219 10:26:29.923660 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bdcwz" Feb 19 10:26:29 crc kubenswrapper[4873]: I0219 10:26:29.992245 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdcwz"] Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.036023 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.036280 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c2d4s" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="registry-server" containerID="cri-o://9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a" gracePeriod=2 Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.546653 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.665687 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8j57\" (UniqueName: \"kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57\") pod \"92377803-fb7e-42d1-ba93-54235a8f9409\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.666061 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities\") pod \"92377803-fb7e-42d1-ba93-54235a8f9409\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.666123 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content\") pod \"92377803-fb7e-42d1-ba93-54235a8f9409\" (UID: \"92377803-fb7e-42d1-ba93-54235a8f9409\") " Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.668315 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities" (OuterVolumeSpecName: "utilities") pod "92377803-fb7e-42d1-ba93-54235a8f9409" (UID: "92377803-fb7e-42d1-ba93-54235a8f9409"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.678302 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57" (OuterVolumeSpecName: "kube-api-access-g8j57") pod "92377803-fb7e-42d1-ba93-54235a8f9409" (UID: "92377803-fb7e-42d1-ba93-54235a8f9409"). InnerVolumeSpecName "kube-api-access-g8j57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.768492 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8j57\" (UniqueName: \"kubernetes.io/projected/92377803-fb7e-42d1-ba93-54235a8f9409-kube-api-access-g8j57\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.768524 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.774406 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92377803-fb7e-42d1-ba93-54235a8f9409" (UID: "92377803-fb7e-42d1-ba93-54235a8f9409"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:26:30 crc kubenswrapper[4873]: I0219 10:26:30.870048 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92377803-fb7e-42d1-ba93-54235a8f9409-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.034924 4873 generic.go:334] "Generic (PLEG): container finished" podID="92377803-fb7e-42d1-ba93-54235a8f9409" containerID="9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a" exitCode=0 Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.034968 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerDied","Data":"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a"} Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.034995 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-c2d4s" event={"ID":"92377803-fb7e-42d1-ba93-54235a8f9409","Type":"ContainerDied","Data":"a08dd4d7e39597a08d46eee691df8d3e8119bb68e610a407ed93ede91eb7581e"} Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.035015 4873 scope.go:117] "RemoveContainer" containerID="9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.035195 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2d4s" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.067745 4873 scope.go:117] "RemoveContainer" containerID="5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.080401 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.101720 4873 scope.go:117] "RemoveContainer" containerID="09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.103497 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c2d4s"] Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.153553 4873 scope.go:117] "RemoveContainer" containerID="9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a" Feb 19 10:26:31 crc kubenswrapper[4873]: E0219 10:26:31.155443 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a\": container with ID starting with 9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a not found: ID does not exist" containerID="9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 
10:26:31.155590 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a"} err="failed to get container status \"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a\": rpc error: code = NotFound desc = could not find container \"9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a\": container with ID starting with 9dcea95636472a4fa99dca5f5648382f340d5a35696bba2dd95d5d4baa05cb5a not found: ID does not exist" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.155673 4873 scope.go:117] "RemoveContainer" containerID="5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2" Feb 19 10:26:31 crc kubenswrapper[4873]: E0219 10:26:31.156136 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2\": container with ID starting with 5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2 not found: ID does not exist" containerID="5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.156177 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2"} err="failed to get container status \"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2\": rpc error: code = NotFound desc = could not find container \"5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2\": container with ID starting with 5485a42ca3dc7f78b9a621c9cc2138b955a8e9f0ec19cb28fe151e0c8ae3a5b2 not found: ID does not exist" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.156203 4873 scope.go:117] "RemoveContainer" containerID="09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593" Feb 19 10:26:31 crc 
kubenswrapper[4873]: E0219 10:26:31.156526 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593\": container with ID starting with 09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593 not found: ID does not exist" containerID="09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.156624 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593"} err="failed to get container status \"09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593\": rpc error: code = NotFound desc = could not find container \"09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593\": container with ID starting with 09d79d6a44f7ab2bd840bebd67e2c4ff2d2bdef097d6feda8d881b4512fee593 not found: ID does not exist" Feb 19 10:26:31 crc kubenswrapper[4873]: I0219 10:26:31.497233 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" path="/var/lib/kubelet/pods/92377803-fb7e-42d1-ba93-54235a8f9409/volumes" Feb 19 10:26:43 crc kubenswrapper[4873]: I0219 10:26:43.487311 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:26:43 crc kubenswrapper[4873]: E0219 10:26:43.488475 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:26:55 crc 
kubenswrapper[4873]: I0219 10:26:55.484239 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:26:56 crc kubenswrapper[4873]: I0219 10:26:56.302243 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20"} Feb 19 10:27:19 crc kubenswrapper[4873]: I0219 10:27:19.517338 4873 generic.go:334] "Generic (PLEG): container finished" podID="bf143721-2963-4009-8e23-0c283b4a88a3" containerID="b868117e1afa026d3ebaa8096a793cced60c973ae3cbedf3ed41777195b019e2" exitCode=0 Feb 19 10:27:19 crc kubenswrapper[4873]: I0219 10:27:19.517426 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" event={"ID":"bf143721-2963-4009-8e23-0c283b4a88a3","Type":"ContainerDied","Data":"b868117e1afa026d3ebaa8096a793cced60c973ae3cbedf3ed41777195b019e2"} Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.973181 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.976812 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.976895 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.976935 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.977035 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.977076 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc 
kubenswrapper[4873]: I0219 10:27:20.977184 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99k8b\" (UniqueName: \"kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.977288 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1\") pod \"bf143721-2963-4009-8e23-0c283b4a88a3\" (UID: \"bf143721-2963-4009-8e23-0c283b4a88a3\") " Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.984270 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b" (OuterVolumeSpecName: "kube-api-access-99k8b") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "kube-api-access-99k8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:20 crc kubenswrapper[4873]: I0219 10:27:20.984352 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.019907 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.023432 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory" (OuterVolumeSpecName: "inventory") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.027325 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.032713 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.033276 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf143721-2963-4009-8e23-0c283b4a88a3" (UID: "bf143721-2963-4009-8e23-0c283b4a88a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079164 4873 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079282 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079454 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99k8b\" (UniqueName: \"kubernetes.io/projected/bf143721-2963-4009-8e23-0c283b4a88a3-kube-api-access-99k8b\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079516 4873 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079591 4873 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 
10:27:21.079648 4873 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.079728 4873 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf143721-2963-4009-8e23-0c283b4a88a3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.538073 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" event={"ID":"bf143721-2963-4009-8e23-0c283b4a88a3","Type":"ContainerDied","Data":"ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac"} Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.538126 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7e10e73d578ccea86cceec7a80c2f08166bb6d93976658c29e6c98ef292cac" Feb 19 10:27:21 crc kubenswrapper[4873]: I0219 10:27:21.538161 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.880315 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 19 10:27:53 crc kubenswrapper[4873]: E0219 10:27:53.881416 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf143721-2963-4009-8e23-0c283b4a88a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881439 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf143721-2963-4009-8e23-0c283b4a88a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:27:53 crc kubenswrapper[4873]: E0219 10:27:53.881469 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="extract-utilities" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881478 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="extract-utilities" Feb 19 10:27:53 crc kubenswrapper[4873]: E0219 10:27:53.881509 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="registry-server" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881517 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="registry-server" Feb 19 10:27:53 crc kubenswrapper[4873]: E0219 10:27:53.881547 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="extract-content" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881555 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="extract-content" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881812 4873 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bf143721-2963-4009-8e23-0c283b4a88a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.881836 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="92377803-fb7e-42d1-ba93-54235a8f9409" containerName="registry-server" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.887508 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.889957 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.899294 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.960001 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.961931 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.964951 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.984132 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997839 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997893 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-scripts\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997925 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997950 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997972 4873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-sys\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.997986 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998005 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-lib-modules\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998024 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998044 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998064 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dev\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-dev\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998133 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-run\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998149 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998161 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-run\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998190 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-dev\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998209 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-scripts\") pod \"cinder-volume-nfs-0\" (UID: 
\"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998229 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998247 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998270 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-sys\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998287 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998312 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc 
kubenswrapper[4873]: I0219 10:27:53.998340 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998355 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998378 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998392 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998419 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72k9m\" (UniqueName: \"kubernetes.io/projected/312e766d-4086-4bab-bf8f-9a154f1da5b5-kube-api-access-72k9m\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998435 4873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998456 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998471 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hxjg\" (UniqueName: \"kubernetes.io/projected/717b3122-e7c6-4cbe-8528-4b582dd7adc5-kube-api-access-4hxjg\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998485 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:53 crc kubenswrapper[4873]: I0219 10:27:53.998508 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.035618 4873 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.037357 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.039485 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.050629 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100502 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100581 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgb6v\" (UniqueName: \"kubernetes.io/projected/8268173a-e7be-4edd-a1e8-bed3486b138e-kube-api-access-jgb6v\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100664 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100702 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100751 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100776 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100812 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100822 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72k9m\" (UniqueName: \"kubernetes.io/projected/312e766d-4086-4bab-bf8f-9a154f1da5b5-kube-api-access-72k9m\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100847 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100875 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100928 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100958 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hxjg\" (UniqueName: \"kubernetes.io/projected/717b3122-e7c6-4cbe-8528-4b582dd7adc5-kube-api-access-4hxjg\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100975 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.100994 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101052 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101072 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101087 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101145 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101201 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-scripts\") pod \"cinder-backup-0\" 
(UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101243 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101272 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101294 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101325 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101356 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-sys\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 
10:27:54.101375 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101394 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101426 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-lib-modules\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101446 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101471 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101485 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101512 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101543 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-dev\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101589 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-run\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101600 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101605 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 
19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101648 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-run\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101719 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101727 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101758 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-run\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101778 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-dev\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101795 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-sys\") pod 
\"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101797 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101843 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101878 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-nvme\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101914 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.101939 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 
10:27:54.102015 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-sys\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102058 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102147 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102175 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102274 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102389 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102762 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-run\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102830 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-sys\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.102936 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-dev\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103043 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103168 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103278 4873 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103409 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103533 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103638 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-lib-modules\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103822 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-dev\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.103986 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/312e766d-4086-4bab-bf8f-9a154f1da5b5-etc-nvme\") pod \"cinder-backup-0\" (UID: 
\"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.104046 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/717b3122-e7c6-4cbe-8528-4b582dd7adc5-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.107961 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data-custom\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.108379 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.110912 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.111935 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-config-data\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.113323 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/312e766d-4086-4bab-bf8f-9a154f1da5b5-scripts\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.113593 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.114034 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.118903 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/717b3122-e7c6-4cbe-8528-4b582dd7adc5-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.119651 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72k9m\" (UniqueName: \"kubernetes.io/projected/312e766d-4086-4bab-bf8f-9a154f1da5b5-kube-api-access-72k9m\") pod \"cinder-backup-0\" (UID: \"312e766d-4086-4bab-bf8f-9a154f1da5b5\") " pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.119756 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hxjg\" (UniqueName: \"kubernetes.io/projected/717b3122-e7c6-4cbe-8528-4b582dd7adc5-kube-api-access-4hxjg\") pod \"cinder-volume-nfs-0\" (UID: \"717b3122-e7c6-4cbe-8528-4b582dd7adc5\") " 
pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205463 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205521 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205536 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205581 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205604 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205630 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205665 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205709 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205732 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205751 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205787 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205810 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205876 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgb6v\" (UniqueName: \"kubernetes.io/projected/8268173a-e7be-4edd-a1e8-bed3486b138e-kube-api-access-jgb6v\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205895 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.205984 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206038 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206090 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206579 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206613 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206614 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206634 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206695 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.206924 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.207006 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8268173a-e7be-4edd-a1e8-bed3486b138e-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.213222 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.213901 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" 
Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.213963 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.214574 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8268173a-e7be-4edd-a1e8-bed3486b138e-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.228300 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgb6v\" (UniqueName: \"kubernetes.io/projected/8268173a-e7be-4edd-a1e8-bed3486b138e-kube-api-access-jgb6v\") pod \"cinder-volume-nfs-2-0\" (UID: \"8268173a-e7be-4edd-a1e8-bed3486b138e\") " pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.241278 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.277899 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.357989 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.897463 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 10:27:54 crc kubenswrapper[4873]: I0219 10:27:54.910679 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"312e766d-4086-4bab-bf8f-9a154f1da5b5","Type":"ContainerStarted","Data":"961f4c4eddf24a0513e2fe9b761a0202865e1cc843dc3d1c04366638dd0088ae"} Feb 19 10:27:55 crc kubenswrapper[4873]: I0219 10:27:55.019344 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 19 10:27:55 crc kubenswrapper[4873]: I0219 10:27:55.102917 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 19 10:27:55 crc kubenswrapper[4873]: W0219 10:27:55.194253 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8268173a_e7be_4edd_a1e8_bed3486b138e.slice/crio-c03e5045e9e471dde8e7d32d98b14fbc330b1ed6fba2f3d47609cc01cbd1b15a WatchSource:0}: Error finding container c03e5045e9e471dde8e7d32d98b14fbc330b1ed6fba2f3d47609cc01cbd1b15a: Status 404 returned error can't find the container with id c03e5045e9e471dde8e7d32d98b14fbc330b1ed6fba2f3d47609cc01cbd1b15a Feb 19 10:27:55 crc kubenswrapper[4873]: I0219 10:27:55.923955 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"312e766d-4086-4bab-bf8f-9a154f1da5b5","Type":"ContainerStarted","Data":"ecbd2535a2e827e730f4f5e040504890874fa719312f4961e2a232ae0dd6038a"} Feb 19 10:27:55 crc kubenswrapper[4873]: I0219 10:27:55.926301 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"717b3122-e7c6-4cbe-8528-4b582dd7adc5","Type":"ContainerStarted","Data":"c3f784c8d49cdf2e2dcbbe10ee20f040fc0f6e1ec216e5a6ed15a35dec145ff5"} Feb 19 10:27:55 crc 
kubenswrapper[4873]: I0219 10:27:55.926345 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"717b3122-e7c6-4cbe-8528-4b582dd7adc5","Type":"ContainerStarted","Data":"355767f3ecdcb659b330cd062c004359cba04a7d72d50415f1ac2f4cadee3afe"} Feb 19 10:27:55 crc kubenswrapper[4873]: I0219 10:27:55.927165 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"8268173a-e7be-4edd-a1e8-bed3486b138e","Type":"ContainerStarted","Data":"c03e5045e9e471dde8e7d32d98b14fbc330b1ed6fba2f3d47609cc01cbd1b15a"} Feb 19 10:27:56 crc kubenswrapper[4873]: I0219 10:27:56.953853 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"312e766d-4086-4bab-bf8f-9a154f1da5b5","Type":"ContainerStarted","Data":"c63855b80845073b98d0858b03ab13806fae9cae4d9a0b9fc97839acc005f5d7"} Feb 19 10:27:56 crc kubenswrapper[4873]: I0219 10:27:56.969400 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"717b3122-e7c6-4cbe-8528-4b582dd7adc5","Type":"ContainerStarted","Data":"4fbd9a9615b0818985931b63b4655fecaed536d5212f0144603bba625756797f"} Feb 19 10:27:56 crc kubenswrapper[4873]: I0219 10:27:56.972639 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"8268173a-e7be-4edd-a1e8-bed3486b138e","Type":"ContainerStarted","Data":"e0a9ab4d817ce1eeddae293a19f306829951931358e2b315242de292c3e25eba"} Feb 19 10:27:56 crc kubenswrapper[4873]: I0219 10:27:56.972678 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"8268173a-e7be-4edd-a1e8-bed3486b138e","Type":"ContainerStarted","Data":"e99a7348a838c4370c5513b26f7f2df0eae85f119ca65f1ebddf578dd3416115"} Feb 19 10:27:56 crc kubenswrapper[4873]: I0219 10:27:56.985900 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" 
podStartSLOduration=3.576394156 podStartE2EDuration="3.985882955s" podCreationTimestamp="2026-02-19 10:27:53 +0000 UTC" firstStartedPulling="2026-02-19 10:27:54.905867178 +0000 UTC m=+2584.195298816" lastFinishedPulling="2026-02-19 10:27:55.315355977 +0000 UTC m=+2584.604787615" observedRunningTime="2026-02-19 10:27:56.981757132 +0000 UTC m=+2586.271188780" watchObservedRunningTime="2026-02-19 10:27:56.985882955 +0000 UTC m=+2586.275314593" Feb 19 10:27:57 crc kubenswrapper[4873]: I0219 10:27:57.042859 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.588840036 podStartE2EDuration="3.042839057s" podCreationTimestamp="2026-02-19 10:27:54 +0000 UTC" firstStartedPulling="2026-02-19 10:27:55.234565119 +0000 UTC m=+2584.523996757" lastFinishedPulling="2026-02-19 10:27:55.68856414 +0000 UTC m=+2584.977995778" observedRunningTime="2026-02-19 10:27:57.032613412 +0000 UTC m=+2586.322045050" watchObservedRunningTime="2026-02-19 10:27:57.042839057 +0000 UTC m=+2586.332270695" Feb 19 10:27:57 crc kubenswrapper[4873]: I0219 10:27:57.045446 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.540261974 podStartE2EDuration="4.045435502s" podCreationTimestamp="2026-02-19 10:27:53 +0000 UTC" firstStartedPulling="2026-02-19 10:27:55.178973331 +0000 UTC m=+2584.468404969" lastFinishedPulling="2026-02-19 10:27:55.684146859 +0000 UTC m=+2584.973578497" observedRunningTime="2026-02-19 10:27:57.00890498 +0000 UTC m=+2586.298336618" watchObservedRunningTime="2026-02-19 10:27:57.045435502 +0000 UTC m=+2586.334867140" Feb 19 10:27:59 crc kubenswrapper[4873]: I0219 10:27:59.241653 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 19 10:27:59 crc kubenswrapper[4873]: I0219 10:27:59.278700 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-volume-nfs-0" Feb 19 10:27:59 crc kubenswrapper[4873]: I0219 10:27:59.358913 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:28:04 crc kubenswrapper[4873]: I0219 10:28:04.490717 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 19 10:28:04 crc kubenswrapper[4873]: I0219 10:28:04.571079 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 19 10:28:04 crc kubenswrapper[4873]: I0219 10:28:04.620661 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.331848 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.332753 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="prometheus" containerID="cri-o://527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f" gracePeriod=600 Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.333406 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="thanos-sidecar" containerID="cri-o://e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573" gracePeriod=600 Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.333466 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="config-reloader" containerID="cri-o://92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980" gracePeriod=600 Feb 19 10:29:03 crc 
kubenswrapper[4873]: I0219 10:29:03.607123 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerID="e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573" exitCode=0 Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.607628 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerID="527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f" exitCode=0 Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.607201 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerDied","Data":"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573"} Feb 19 10:29:03 crc kubenswrapper[4873]: I0219 10:29:03.607832 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerDied","Data":"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f"} Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.381213 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471715 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471819 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471850 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471898 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snd6v\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471944 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 
10:29:04.471970 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.471996 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472261 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472350 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472376 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: 
\"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472438 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472493 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.472519 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out\") pod \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\" (UID: \"d1070e0c-7518-4d1b-bbb8-e56db1cad28a\") " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.476082 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.484020 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.485019 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out" (OuterVolumeSpecName: "config-out") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.485154 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.493382 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.496088 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v" (OuterVolumeSpecName: "kube-api-access-snd6v") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "kube-api-access-snd6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.497498 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config" (OuterVolumeSpecName: "config") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.497600 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.508222 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.509299 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.536306 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.636086 4873 generic.go:334] "Generic (PLEG): container finished" podID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerID="92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980" exitCode=0 Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.636157 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerDied","Data":"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980"} Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.636192 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d1070e0c-7518-4d1b-bbb8-e56db1cad28a","Type":"ContainerDied","Data":"67a74c25d6b44ccd6cb397b300a6cd2025bf7fa88890d389ef81197cfb4ef22d"} Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.636228 4873 scope.go:117] "RemoveContainer" containerID="e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.636459 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.637407 4873 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640716 4873 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640802 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640817 4873 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640826 4873 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640834 4873 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640848 4873 reconciler_common.go:293] "Volume detached for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640866 4873 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640876 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snd6v\" (UniqueName: \"kubernetes.io/projected/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-kube-api-access-snd6v\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640895 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.640909 4873 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.654008 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.712885 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config" (OuterVolumeSpecName: "web-config") pod "d1070e0c-7518-4d1b-bbb8-e56db1cad28a" (UID: "d1070e0c-7518-4d1b-bbb8-e56db1cad28a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.743256 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") on node \"crc\" " Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.743299 4873 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1070e0c-7518-4d1b-bbb8-e56db1cad28a-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.766676 4873 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.767575 4873 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d") on node "crc" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.821456 4873 scope.go:117] "RemoveContainer" containerID="92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.843970 4873 scope.go:117] "RemoveContainer" containerID="527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.845782 4873 reconciler_common.go:293] "Volume detached for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.862160 4873 scope.go:117] "RemoveContainer" containerID="5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.907847 4873 scope.go:117] "RemoveContainer" containerID="e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573" Feb 19 10:29:04 crc kubenswrapper[4873]: E0219 10:29:04.908347 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573\": container with ID starting with e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573 not found: ID does not exist" containerID="e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.908383 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573"} err="failed to get container status \"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573\": rpc error: code = NotFound desc = could not find container \"e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573\": container with ID starting with e807a1b081c19c6059b7e493eef6d50be38743a8a7b0942d44ce78b41a711573 not found: ID does not exist" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.908411 4873 scope.go:117] "RemoveContainer" containerID="92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980" Feb 19 10:29:04 crc kubenswrapper[4873]: E0219 10:29:04.908645 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980\": container with ID starting with 92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980 not found: ID does not exist" containerID="92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.908677 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980"} err="failed to get container status \"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980\": rpc error: code = NotFound desc = could not find container \"92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980\": container with ID starting with 92436e82b0191e1ee2ed056fcf87daa473f88f898d89c02a039d222108f86980 not found: ID does not exist" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.908695 4873 scope.go:117] "RemoveContainer" containerID="527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f" Feb 19 10:29:04 crc kubenswrapper[4873]: E0219 10:29:04.909042 4873 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f\": container with ID starting with 527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f not found: ID does not exist" containerID="527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.909073 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f"} err="failed to get container status \"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f\": rpc error: code = NotFound desc = could not find container \"527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f\": container with ID starting with 527fb1a245b8b03bec5d82faa5a774b17399f65786d8762162ef556987d4ff0f not found: ID does not exist" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.909092 4873 scope.go:117] "RemoveContainer" containerID="5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52" Feb 19 10:29:04 crc kubenswrapper[4873]: E0219 10:29:04.909575 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52\": container with ID starting with 5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52 not found: ID does not exist" containerID="5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.909602 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52"} err="failed to get container status \"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52\": rpc error: code = NotFound desc = could not find container 
\"5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52\": container with ID starting with 5dba258377743a5715353e87db5d11c9c4d59b17ccf94ca70c8d425eaaad3a52 not found: ID does not exist" Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.974748 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:04 crc kubenswrapper[4873]: I0219 10:29:04.983166 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.001381 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:05 crc kubenswrapper[4873]: E0219 10:29:05.001863 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="config-reloader" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.001887 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="config-reloader" Feb 19 10:29:05 crc kubenswrapper[4873]: E0219 10:29:05.001929 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="thanos-sidecar" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.001939 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="thanos-sidecar" Feb 19 10:29:05 crc kubenswrapper[4873]: E0219 10:29:05.001956 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="prometheus" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.001964 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="prometheus" Feb 19 10:29:05 crc kubenswrapper[4873]: E0219 10:29:05.001980 4873 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="init-config-reloader" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.001989 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="init-config-reloader" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.002885 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="thanos-sidecar" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.002911 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="config-reloader" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.002945 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" containerName="prometheus" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.005318 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.010402 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.010481 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.010541 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.010541 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.011573 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:29:05 crc 
kubenswrapper[4873]: I0219 10:29:05.011594 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.013379 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-stpz9" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.021226 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.025086 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.153855 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ae630a8f-ee42-4f96-adb9-d18bf713af37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.153972 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.153996 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154026 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154059 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlkc5\" (UniqueName: \"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-kube-api-access-xlkc5\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154134 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154165 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154185 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154253 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154361 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154420 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154488 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.154507 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256210 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256255 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256287 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256326 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlkc5\" (UniqueName: 
\"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-kube-api-access-xlkc5\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256344 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256375 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256390 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256422 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256472 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.256499 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257365 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257405 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257489 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257511 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ae630a8f-ee42-4f96-adb9-d18bf713af37-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257517 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.257726 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ae630a8f-ee42-4f96-adb9-d18bf713af37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.262053 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.262338 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 
10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.263231 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.264511 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ae630a8f-ee42-4f96-adb9-d18bf713af37-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.264992 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.265180 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.267652 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-config\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 
crc kubenswrapper[4873]: I0219 10:29:05.267864 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ae630a8f-ee42-4f96-adb9-d18bf713af37-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.268974 4873 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.269001 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/668a04d4437b4137f130ddea3fc0a68c22db655664b336b39ceb124bf62a44ab/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.278138 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlkc5\" (UniqueName: \"kubernetes.io/projected/ae630a8f-ee42-4f96-adb9-d18bf713af37-kube-api-access-xlkc5\") pod \"prometheus-metric-storage-0\" (UID: \"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.321488 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53cd3a8c-3594-457a-b730-9dd2241fe45d\") pod \"prometheus-metric-storage-0\" (UID: 
\"ae630a8f-ee42-4f96-adb9-d18bf713af37\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.495896 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1070e0c-7518-4d1b-bbb8-e56db1cad28a" path="/var/lib/kubelet/pods/d1070e0c-7518-4d1b-bbb8-e56db1cad28a/volumes" Feb 19 10:29:05 crc kubenswrapper[4873]: I0219 10:29:05.623360 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:06 crc kubenswrapper[4873]: I0219 10:29:06.158361 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:29:06 crc kubenswrapper[4873]: I0219 10:29:06.657779 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerStarted","Data":"431013b032f4126b2744293e55d0bfd32e67adcddf91411f3b1f8d94f1b15cd7"} Feb 19 10:29:09 crc kubenswrapper[4873]: I0219 10:29:09.684484 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerStarted","Data":"dad54f9a71fb953da71ad6657f7fd90a4bb9946b85d99390456690e39db803c2"} Feb 19 10:29:15 crc kubenswrapper[4873]: I0219 10:29:15.738465 4873 generic.go:334] "Generic (PLEG): container finished" podID="ae630a8f-ee42-4f96-adb9-d18bf713af37" containerID="dad54f9a71fb953da71ad6657f7fd90a4bb9946b85d99390456690e39db803c2" exitCode=0 Feb 19 10:29:15 crc kubenswrapper[4873]: I0219 10:29:15.738551 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerDied","Data":"dad54f9a71fb953da71ad6657f7fd90a4bb9946b85d99390456690e39db803c2"} Feb 19 10:29:16 crc kubenswrapper[4873]: I0219 10:29:16.755720 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerStarted","Data":"a52a629df92a35867f1899536b8ad19d872e8ccf3d9767f467994eae589fe27f"} Feb 19 10:29:18 crc kubenswrapper[4873]: I0219 10:29:18.240659 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:29:18 crc kubenswrapper[4873]: I0219 10:29:18.241248 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:29:19 crc kubenswrapper[4873]: I0219 10:29:19.801818 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerStarted","Data":"7d28acf3957d0a323b9145c1139424486a892430cccbdca5fddf144d4f3ea371"} Feb 19 10:29:19 crc kubenswrapper[4873]: I0219 10:29:19.802210 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ae630a8f-ee42-4f96-adb9-d18bf713af37","Type":"ContainerStarted","Data":"7d779ac110a22a90f33893509862674bc0b9550c83fc6e2befa3abc140c292fd"} Feb 19 10:29:19 crc kubenswrapper[4873]: I0219 10:29:19.852508 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.852484569 podStartE2EDuration="15.852484569s" podCreationTimestamp="2026-02-19 10:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:29:19.845751319 +0000 UTC m=+2669.135182957" watchObservedRunningTime="2026-02-19 10:29:19.852484569 +0000 UTC m=+2669.141916207" Feb 19 10:29:20 crc kubenswrapper[4873]: I0219 10:29:20.624908 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:20 crc kubenswrapper[4873]: I0219 10:29:20.625485 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:20 crc kubenswrapper[4873]: I0219 10:29:20.633655 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:20 crc kubenswrapper[4873]: I0219 10:29:20.819439 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:29:48 crc kubenswrapper[4873]: I0219 10:29:48.240916 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:29:48 crc kubenswrapper[4873]: I0219 10:29:48.241657 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.201797 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.203726 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.206753 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.207443 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.207930 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.208353 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5bdht" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.213577 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375334 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375388 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375410 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375467 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375552 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375571 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375595 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375615 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.375639 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqm6\" (UniqueName: \"kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.477443 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.477835 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.477978 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478130 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478306 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqm6\" (UniqueName: \"kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478525 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478659 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478785 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478335 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.478977 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.479316 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.479504 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.479702 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.480303 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " 
pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.484488 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.485242 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.485342 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.504365 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqm6\" (UniqueName: \"kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.513495 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") " pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.522802 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:29:52 crc kubenswrapper[4873]: I0219 10:29:52.974595 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:29:53 crc kubenswrapper[4873]: I0219 10:29:53.126725 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e5a79da-a068-4a68-ba79-6719ea0fb353","Type":"ContainerStarted","Data":"2ab42e52f993d6514497f49e7da17659fa93e4ac5da7295a0f0f52c753b83b71"} Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.141067 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k"] Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.143528 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.147456 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.147602 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.157159 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k"] Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.244012 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc 
kubenswrapper[4873]: I0219 10:30:00.244178 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvtm\" (UniqueName: \"kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.244218 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.347921 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.348155 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bvtm\" (UniqueName: \"kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.348213 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume\") pod 
\"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.353805 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.363880 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bvtm\" (UniqueName: \"kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.368956 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume\") pod \"collect-profiles-29524950-pp89k\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:00 crc kubenswrapper[4873]: I0219 10:30:00.471020 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:02 crc kubenswrapper[4873]: I0219 10:30:02.685340 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k"] Feb 19 10:30:03 crc kubenswrapper[4873]: I0219 10:30:03.228676 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" event={"ID":"9e250d05-a293-4a3c-8658-99d1ae2dc894","Type":"ContainerStarted","Data":"97237f992c0b70ff79ac5f913c59bb566d1f47e053b9c438613bd77bb3e8a5fe"} Feb 19 10:30:03 crc kubenswrapper[4873]: I0219 10:30:03.229010 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" event={"ID":"9e250d05-a293-4a3c-8658-99d1ae2dc894","Type":"ContainerStarted","Data":"af7c4713ce82bd54098f28780ef63c539c161e9088aec99b9673ec780d2c6e07"} Feb 19 10:30:03 crc kubenswrapper[4873]: I0219 10:30:03.255412 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" podStartSLOduration=3.255392707 podStartE2EDuration="3.255392707s" podCreationTimestamp="2026-02-19 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:03.242286008 +0000 UTC m=+2712.531717646" watchObservedRunningTime="2026-02-19 10:30:03.255392707 +0000 UTC m=+2712.544824345" Feb 19 10:30:04 crc kubenswrapper[4873]: I0219 10:30:04.238380 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e5a79da-a068-4a68-ba79-6719ea0fb353","Type":"ContainerStarted","Data":"edd9b7584d145cddbcf9d8449ca8d5546aa8224b7f3731235eeab85ccb091862"} Feb 19 10:30:04 crc kubenswrapper[4873]: I0219 10:30:04.241210 4873 generic.go:334] "Generic (PLEG): 
container finished" podID="9e250d05-a293-4a3c-8658-99d1ae2dc894" containerID="97237f992c0b70ff79ac5f913c59bb566d1f47e053b9c438613bd77bb3e8a5fe" exitCode=0 Feb 19 10:30:04 crc kubenswrapper[4873]: I0219 10:30:04.241259 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" event={"ID":"9e250d05-a293-4a3c-8658-99d1ae2dc894","Type":"ContainerDied","Data":"97237f992c0b70ff79ac5f913c59bb566d1f47e053b9c438613bd77bb3e8a5fe"} Feb 19 10:30:04 crc kubenswrapper[4873]: I0219 10:30:04.256736 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.99174221 podStartE2EDuration="13.256714072s" podCreationTimestamp="2026-02-19 10:29:51 +0000 UTC" firstStartedPulling="2026-02-19 10:29:52.978986846 +0000 UTC m=+2702.268418484" lastFinishedPulling="2026-02-19 10:30:02.243958708 +0000 UTC m=+2711.533390346" observedRunningTime="2026-02-19 10:30:04.25463747 +0000 UTC m=+2713.544069128" watchObservedRunningTime="2026-02-19 10:30:04.256714072 +0000 UTC m=+2713.546145720" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.637310 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.755778 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"] Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.761511 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume\") pod \"9e250d05-a293-4a3c-8658-99d1ae2dc894\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.761761 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bvtm\" (UniqueName: \"kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm\") pod \"9e250d05-a293-4a3c-8658-99d1ae2dc894\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.761949 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e250d05-a293-4a3c-8658-99d1ae2dc894" (UID: "9e250d05-a293-4a3c-8658-99d1ae2dc894"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.762625 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume\") pod \"9e250d05-a293-4a3c-8658-99d1ae2dc894\" (UID: \"9e250d05-a293-4a3c-8658-99d1ae2dc894\") " Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.763296 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e250d05-a293-4a3c-8658-99d1ae2dc894-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.765324 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-jqdfw"] Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.775336 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm" (OuterVolumeSpecName: "kube-api-access-4bvtm") pod "9e250d05-a293-4a3c-8658-99d1ae2dc894" (UID: "9e250d05-a293-4a3c-8658-99d1ae2dc894"). InnerVolumeSpecName "kube-api-access-4bvtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.775984 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e250d05-a293-4a3c-8658-99d1ae2dc894" (UID: "9e250d05-a293-4a3c-8658-99d1ae2dc894"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.865681 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bvtm\" (UniqueName: \"kubernetes.io/projected/9e250d05-a293-4a3c-8658-99d1ae2dc894-kube-api-access-4bvtm\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:05 crc kubenswrapper[4873]: I0219 10:30:05.865708 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e250d05-a293-4a3c-8658-99d1ae2dc894-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:06 crc kubenswrapper[4873]: I0219 10:30:06.265094 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" event={"ID":"9e250d05-a293-4a3c-8658-99d1ae2dc894","Type":"ContainerDied","Data":"af7c4713ce82bd54098f28780ef63c539c161e9088aec99b9673ec780d2c6e07"} Feb 19 10:30:06 crc kubenswrapper[4873]: I0219 10:30:06.265150 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af7c4713ce82bd54098f28780ef63c539c161e9088aec99b9673ec780d2c6e07" Feb 19 10:30:06 crc kubenswrapper[4873]: I0219 10:30:06.265208 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k" Feb 19 10:30:07 crc kubenswrapper[4873]: I0219 10:30:07.496970 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de77b9aa-b558-4431-b116-5e1e1cc116f3" path="/var/lib/kubelet/pods/de77b9aa-b558-4431-b116-5e1e1cc116f3/volumes" Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.240214 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.240730 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.240787 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.241628 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.241680 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" 
containerName="machine-config-daemon" containerID="cri-o://edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20" gracePeriod=600 Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.443800 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20" exitCode=0 Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.443892 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20"} Feb 19 10:30:18 crc kubenswrapper[4873]: I0219 10:30:18.444091 4873 scope.go:117] "RemoveContainer" containerID="dca79e332cae1f6387c5683fc865ffa663ef7876e44a7e998dd322e8f3f6a806" Feb 19 10:30:19 crc kubenswrapper[4873]: I0219 10:30:19.453869 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"} Feb 19 10:31:02 crc kubenswrapper[4873]: I0219 10:31:02.180193 4873 scope.go:117] "RemoveContainer" containerID="e60bc2f916aff75454f8db4d5b15c6ae005baebfdcb79c0c87df06d3a9db5142" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.821144 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:29 crc kubenswrapper[4873]: E0219 10:31:29.822242 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e250d05-a293-4a3c-8658-99d1ae2dc894" containerName="collect-profiles" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.822258 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e250d05-a293-4a3c-8658-99d1ae2dc894" containerName="collect-profiles" 
Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.822509 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e250d05-a293-4a3c-8658-99d1ae2dc894" containerName="collect-profiles" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.824185 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.845926 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.937830 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.937922 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44vst\" (UniqueName: \"kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:29 crc kubenswrapper[4873]: I0219 10:31:29.938138 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.040166 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.040229 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.040288 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44vst\" (UniqueName: \"kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.041073 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.041078 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.061124 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44vst\" (UniqueName: 
\"kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst\") pod \"community-operators-rd7vw\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.149503 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:30 crc kubenswrapper[4873]: I0219 10:31:30.723373 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:31 crc kubenswrapper[4873]: I0219 10:31:31.110930 4873 generic.go:334] "Generic (PLEG): container finished" podID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerID="0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8" exitCode=0 Feb 19 10:31:31 crc kubenswrapper[4873]: I0219 10:31:31.111113 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerDied","Data":"0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8"} Feb 19 10:31:31 crc kubenswrapper[4873]: I0219 10:31:31.111356 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerStarted","Data":"4b77c8207015a73521213a8d125fc1fc57465ec7ba0af072e6fc7af04a1b9a40"} Feb 19 10:31:31 crc kubenswrapper[4873]: I0219 10:31:31.113963 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:31:32 crc kubenswrapper[4873]: I0219 10:31:32.120494 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerStarted","Data":"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174"} Feb 19 10:31:34 
crc kubenswrapper[4873]: I0219 10:31:34.138899 4873 generic.go:334] "Generic (PLEG): container finished" podID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerID="cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174" exitCode=0 Feb 19 10:31:34 crc kubenswrapper[4873]: I0219 10:31:34.138978 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerDied","Data":"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174"} Feb 19 10:31:35 crc kubenswrapper[4873]: I0219 10:31:35.150243 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerStarted","Data":"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a"} Feb 19 10:31:35 crc kubenswrapper[4873]: I0219 10:31:35.170868 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rd7vw" podStartSLOduration=2.480003354 podStartE2EDuration="6.17084718s" podCreationTimestamp="2026-02-19 10:31:29 +0000 UTC" firstStartedPulling="2026-02-19 10:31:31.113758509 +0000 UTC m=+2800.403190147" lastFinishedPulling="2026-02-19 10:31:34.804602335 +0000 UTC m=+2804.094033973" observedRunningTime="2026-02-19 10:31:35.164046599 +0000 UTC m=+2804.453478237" watchObservedRunningTime="2026-02-19 10:31:35.17084718 +0000 UTC m=+2804.460278818" Feb 19 10:31:40 crc kubenswrapper[4873]: I0219 10:31:40.149747 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:40 crc kubenswrapper[4873]: I0219 10:31:40.150299 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:40 crc kubenswrapper[4873]: I0219 10:31:40.209515 4873 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:40 crc kubenswrapper[4873]: I0219 10:31:40.284598 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:40 crc kubenswrapper[4873]: I0219 10:31:40.451432 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.224023 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rd7vw" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="registry-server" containerID="cri-o://f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a" gracePeriod=2 Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.707385 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.820846 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content\") pod \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.821148 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities\") pod \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.821199 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44vst\" (UniqueName: 
\"kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst\") pod \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\" (UID: \"4ae001dc-8355-422d-909f-d7eb1f4e80fe\") " Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.821751 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities" (OuterVolumeSpecName: "utilities") pod "4ae001dc-8355-422d-909f-d7eb1f4e80fe" (UID: "4ae001dc-8355-422d-909f-d7eb1f4e80fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.836473 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst" (OuterVolumeSpecName: "kube-api-access-44vst") pod "4ae001dc-8355-422d-909f-d7eb1f4e80fe" (UID: "4ae001dc-8355-422d-909f-d7eb1f4e80fe"). InnerVolumeSpecName "kube-api-access-44vst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.880464 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ae001dc-8355-422d-909f-d7eb1f4e80fe" (UID: "4ae001dc-8355-422d-909f-d7eb1f4e80fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.924849 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44vst\" (UniqueName: \"kubernetes.io/projected/4ae001dc-8355-422d-909f-d7eb1f4e80fe-kube-api-access-44vst\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.924905 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:42 crc kubenswrapper[4873]: I0219 10:31:42.924920 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae001dc-8355-422d-909f-d7eb1f4e80fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.234985 4873 generic.go:334] "Generic (PLEG): container finished" podID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerID="f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a" exitCode=0 Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.235024 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerDied","Data":"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a"} Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.235054 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rd7vw" event={"ID":"4ae001dc-8355-422d-909f-d7eb1f4e80fe","Type":"ContainerDied","Data":"4b77c8207015a73521213a8d125fc1fc57465ec7ba0af072e6fc7af04a1b9a40"} Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.235073 4873 scope.go:117] "RemoveContainer" containerID="f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 
10:31:43.235080 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rd7vw" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.256297 4873 scope.go:117] "RemoveContainer" containerID="cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.291176 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.307485 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rd7vw"] Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.323581 4873 scope.go:117] "RemoveContainer" containerID="0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.350358 4873 scope.go:117] "RemoveContainer" containerID="f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a" Feb 19 10:31:43 crc kubenswrapper[4873]: E0219 10:31:43.350853 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a\": container with ID starting with f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a not found: ID does not exist" containerID="f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.350889 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a"} err="failed to get container status \"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a\": rpc error: code = NotFound desc = could not find container \"f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a\": container with ID starting with 
f09f5abde9388e17663169e87798acd45cbb400c6bc4af9c08536f553182545a not found: ID does not exist" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.350910 4873 scope.go:117] "RemoveContainer" containerID="cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174" Feb 19 10:31:43 crc kubenswrapper[4873]: E0219 10:31:43.351262 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174\": container with ID starting with cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174 not found: ID does not exist" containerID="cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.351290 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174"} err="failed to get container status \"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174\": rpc error: code = NotFound desc = could not find container \"cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174\": container with ID starting with cd25869832d3ec5c774490990580f20eda364ccd36b96508fe7b8c2414ec8174 not found: ID does not exist" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.351306 4873 scope.go:117] "RemoveContainer" containerID="0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8" Feb 19 10:31:43 crc kubenswrapper[4873]: E0219 10:31:43.351560 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8\": container with ID starting with 0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8 not found: ID does not exist" containerID="0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8" Feb 19 10:31:43 crc 
kubenswrapper[4873]: I0219 10:31:43.351609 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8"} err="failed to get container status \"0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8\": rpc error: code = NotFound desc = could not find container \"0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8\": container with ID starting with 0727d3766c2b69920ee16cacf47fa390d5337877553fd613b0bde11e96028bc8 not found: ID does not exist" Feb 19 10:31:43 crc kubenswrapper[4873]: I0219 10:31:43.495799 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" path="/var/lib/kubelet/pods/4ae001dc-8355-422d-909f-d7eb1f4e80fe/volumes" Feb 19 10:32:18 crc kubenswrapper[4873]: I0219 10:32:18.241031 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:32:18 crc kubenswrapper[4873]: I0219 10:32:18.241687 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:32:48 crc kubenswrapper[4873]: I0219 10:32:48.240750 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:32:48 crc kubenswrapper[4873]: I0219 10:32:48.241367 4873 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.171888 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:32:54 crc kubenswrapper[4873]: E0219 10:32:54.174023 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="registry-server" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.174069 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="registry-server" Feb 19 10:32:54 crc kubenswrapper[4873]: E0219 10:32:54.174094 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="extract-utilities" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.174142 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="extract-utilities" Feb 19 10:32:54 crc kubenswrapper[4873]: E0219 10:32:54.174165 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="extract-content" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.174175 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="extract-content" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.176049 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae001dc-8355-422d-909f-d7eb1f4e80fe" containerName="registry-server" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.179609 4873 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.192420 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.328822 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnpq\" (UniqueName: \"kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.328926 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.329053 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.431339 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.431517 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.431646 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnpq\" (UniqueName: \"kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.431914 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.431951 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.459084 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnpq\" (UniqueName: \"kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq\") pod \"redhat-operators-82qzr\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:54 crc kubenswrapper[4873]: I0219 10:32:54.520999 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:32:55 crc kubenswrapper[4873]: I0219 10:32:55.031765 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:32:55 crc kubenswrapper[4873]: I0219 10:32:55.937633 4873 generic.go:334] "Generic (PLEG): container finished" podID="78f117c0-1029-4b43-ab4c-486312acf531" containerID="20bead4efa3d8df668099d6f92638ebd2934b3a50e58e9767728bce58454d134" exitCode=0 Feb 19 10:32:55 crc kubenswrapper[4873]: I0219 10:32:55.937704 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerDied","Data":"20bead4efa3d8df668099d6f92638ebd2934b3a50e58e9767728bce58454d134"} Feb 19 10:32:55 crc kubenswrapper[4873]: I0219 10:32:55.937916 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerStarted","Data":"23eeef1ed125c5cde8dc4bc331e27cdd6957f05bc2af3732d543cd4f4e9e6072"} Feb 19 10:32:57 crc kubenswrapper[4873]: I0219 10:32:57.965121 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerStarted","Data":"8fc2239af333bdc91dd32de0cb27c8416c88fd0a92fe40bc7d18bcc44f17dba7"} Feb 19 10:33:03 crc kubenswrapper[4873]: I0219 10:33:03.012885 4873 generic.go:334] "Generic (PLEG): container finished" podID="78f117c0-1029-4b43-ab4c-486312acf531" containerID="8fc2239af333bdc91dd32de0cb27c8416c88fd0a92fe40bc7d18bcc44f17dba7" exitCode=0 Feb 19 10:33:03 crc kubenswrapper[4873]: I0219 10:33:03.012981 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" 
event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerDied","Data":"8fc2239af333bdc91dd32de0cb27c8416c88fd0a92fe40bc7d18bcc44f17dba7"} Feb 19 10:33:04 crc kubenswrapper[4873]: I0219 10:33:04.026621 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerStarted","Data":"1784bbce73148192c9784e04f030b74889d805efeb3cb76e27e0dcf0b45ea58b"} Feb 19 10:33:04 crc kubenswrapper[4873]: I0219 10:33:04.046837 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82qzr" podStartSLOduration=2.540697879 podStartE2EDuration="10.046818345s" podCreationTimestamp="2026-02-19 10:32:54 +0000 UTC" firstStartedPulling="2026-02-19 10:32:55.940594934 +0000 UTC m=+2885.230026572" lastFinishedPulling="2026-02-19 10:33:03.4467154 +0000 UTC m=+2892.736147038" observedRunningTime="2026-02-19 10:33:04.046182589 +0000 UTC m=+2893.335614237" watchObservedRunningTime="2026-02-19 10:33:04.046818345 +0000 UTC m=+2893.336249983" Feb 19 10:33:04 crc kubenswrapper[4873]: I0219 10:33:04.521847 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:04 crc kubenswrapper[4873]: I0219 10:33:04.522287 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:05 crc kubenswrapper[4873]: I0219 10:33:05.570356 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82qzr" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="registry-server" probeResult="failure" output=< Feb 19 10:33:05 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:33:05 crc kubenswrapper[4873]: > Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.152374 4873 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.157164 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.161920 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.285529 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.285862 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6ws\" (UniqueName: \"kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.286225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.387927 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6ws\" (UniqueName: \"kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws\") pod \"redhat-marketplace-2jmxn\" (UID: 
\"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.388073 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.388175 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.388837 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.388858 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.410980 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6ws\" (UniqueName: \"kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws\") pod \"redhat-marketplace-2jmxn\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " 
pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:10 crc kubenswrapper[4873]: I0219 10:33:10.505562 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:11 crc kubenswrapper[4873]: I0219 10:33:11.026537 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:11 crc kubenswrapper[4873]: I0219 10:33:11.098016 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerStarted","Data":"3d648f78e1bea540f365ed9db4323aba5f331a0de290c0949ac76e5f5c3d0283"} Feb 19 10:33:12 crc kubenswrapper[4873]: I0219 10:33:12.110333 4873 generic.go:334] "Generic (PLEG): container finished" podID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerID="e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7" exitCode=0 Feb 19 10:33:12 crc kubenswrapper[4873]: I0219 10:33:12.110419 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerDied","Data":"e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7"} Feb 19 10:33:13 crc kubenswrapper[4873]: I0219 10:33:13.131478 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerStarted","Data":"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55"} Feb 19 10:33:14 crc kubenswrapper[4873]: I0219 10:33:14.151500 4873 generic.go:334] "Generic (PLEG): container finished" podID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerID="58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55" exitCode=0 Feb 19 10:33:14 crc kubenswrapper[4873]: I0219 10:33:14.151572 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerDied","Data":"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55"} Feb 19 10:33:14 crc kubenswrapper[4873]: I0219 10:33:14.571821 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:14 crc kubenswrapper[4873]: I0219 10:33:14.638085 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:15 crc kubenswrapper[4873]: I0219 10:33:15.165287 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerStarted","Data":"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45"} Feb 19 10:33:15 crc kubenswrapper[4873]: I0219 10:33:15.189688 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2jmxn" podStartSLOduration=2.764588609 podStartE2EDuration="5.189670107s" podCreationTimestamp="2026-02-19 10:33:10 +0000 UTC" firstStartedPulling="2026-02-19 10:33:12.113673058 +0000 UTC m=+2901.403104696" lastFinishedPulling="2026-02-19 10:33:14.538754556 +0000 UTC m=+2903.828186194" observedRunningTime="2026-02-19 10:33:15.184786314 +0000 UTC m=+2904.474217952" watchObservedRunningTime="2026-02-19 10:33:15.189670107 +0000 UTC m=+2904.479101735" Feb 19 10:33:16 crc kubenswrapper[4873]: I0219 10:33:16.933712 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:33:16 crc kubenswrapper[4873]: I0219 10:33:16.935870 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82qzr" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="registry-server" 
containerID="cri-o://1784bbce73148192c9784e04f030b74889d805efeb3cb76e27e0dcf0b45ea58b" gracePeriod=2 Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.192986 4873 generic.go:334] "Generic (PLEG): container finished" podID="78f117c0-1029-4b43-ab4c-486312acf531" containerID="1784bbce73148192c9784e04f030b74889d805efeb3cb76e27e0dcf0b45ea58b" exitCode=0 Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.193281 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerDied","Data":"1784bbce73148192c9784e04f030b74889d805efeb3cb76e27e0dcf0b45ea58b"} Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.400732 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.455825 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content\") pod \"78f117c0-1029-4b43-ab4c-486312acf531\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.455904 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnpq\" (UniqueName: \"kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq\") pod \"78f117c0-1029-4b43-ab4c-486312acf531\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.455937 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities\") pod \"78f117c0-1029-4b43-ab4c-486312acf531\" (UID: \"78f117c0-1029-4b43-ab4c-486312acf531\") " Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 
10:33:17.456889 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities" (OuterVolumeSpecName: "utilities") pod "78f117c0-1029-4b43-ab4c-486312acf531" (UID: "78f117c0-1029-4b43-ab4c-486312acf531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.477000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq" (OuterVolumeSpecName: "kube-api-access-kgnpq") pod "78f117c0-1029-4b43-ab4c-486312acf531" (UID: "78f117c0-1029-4b43-ab4c-486312acf531"). InnerVolumeSpecName "kube-api-access-kgnpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.559633 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnpq\" (UniqueName: \"kubernetes.io/projected/78f117c0-1029-4b43-ab4c-486312acf531-kube-api-access-kgnpq\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.559671 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.590859 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78f117c0-1029-4b43-ab4c-486312acf531" (UID: "78f117c0-1029-4b43-ab4c-486312acf531"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:17 crc kubenswrapper[4873]: I0219 10:33:17.661409 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f117c0-1029-4b43-ab4c-486312acf531-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.204475 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82qzr" event={"ID":"78f117c0-1029-4b43-ab4c-486312acf531","Type":"ContainerDied","Data":"23eeef1ed125c5cde8dc4bc331e27cdd6957f05bc2af3732d543cd4f4e9e6072"} Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.204549 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82qzr" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.204821 4873 scope.go:117] "RemoveContainer" containerID="1784bbce73148192c9784e04f030b74889d805efeb3cb76e27e0dcf0b45ea58b" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.227715 4873 scope.go:117] "RemoveContainer" containerID="8fc2239af333bdc91dd32de0cb27c8416c88fd0a92fe40bc7d18bcc44f17dba7" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.240808 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.240858 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 
10:33:18.240907 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.241665 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.241723 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" gracePeriod=600 Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.242090 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.254633 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82qzr"] Feb 19 10:33:18 crc kubenswrapper[4873]: I0219 10:33:18.268851 4873 scope.go:117] "RemoveContainer" containerID="20bead4efa3d8df668099d6f92638ebd2934b3a50e58e9767728bce58454d134" Feb 19 10:33:18 crc kubenswrapper[4873]: E0219 10:33:18.379071 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" 
Feb 19 10:33:19 crc kubenswrapper[4873]: I0219 10:33:19.217215 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" exitCode=0 Feb 19 10:33:19 crc kubenswrapper[4873]: I0219 10:33:19.217285 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"} Feb 19 10:33:19 crc kubenswrapper[4873]: I0219 10:33:19.217554 4873 scope.go:117] "RemoveContainer" containerID="edec24981d97e1beda63a2d9013b9abdb1a1dbeed2c76ab65161659d51d3be20" Feb 19 10:33:19 crc kubenswrapper[4873]: I0219 10:33:19.218243 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:33:19 crc kubenswrapper[4873]: E0219 10:33:19.218594 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:33:19 crc kubenswrapper[4873]: I0219 10:33:19.495312 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f117c0-1029-4b43-ab4c-486312acf531" path="/var/lib/kubelet/pods/78f117c0-1029-4b43-ab4c-486312acf531/volumes" Feb 19 10:33:20 crc kubenswrapper[4873]: I0219 10:33:20.506040 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:20 crc kubenswrapper[4873]: I0219 10:33:20.506343 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:20 crc kubenswrapper[4873]: I0219 10:33:20.552913 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:21 crc kubenswrapper[4873]: I0219 10:33:21.305818 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:21 crc kubenswrapper[4873]: I0219 10:33:21.719062 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.264604 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2jmxn" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="registry-server" containerID="cri-o://2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45" gracePeriod=2 Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.730798 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.784698 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content\") pod \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.784772 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities\") pod \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.784827 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6ws\" (UniqueName: \"kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws\") pod \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\" (UID: \"0661a6c4-6ace-47e5-b3de-bcee0bda9714\") " Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.787094 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities" (OuterVolumeSpecName: "utilities") pod "0661a6c4-6ace-47e5-b3de-bcee0bda9714" (UID: "0661a6c4-6ace-47e5-b3de-bcee0bda9714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.791781 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws" (OuterVolumeSpecName: "kube-api-access-kj6ws") pod "0661a6c4-6ace-47e5-b3de-bcee0bda9714" (UID: "0661a6c4-6ace-47e5-b3de-bcee0bda9714"). InnerVolumeSpecName "kube-api-access-kj6ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.810541 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0661a6c4-6ace-47e5-b3de-bcee0bda9714" (UID: "0661a6c4-6ace-47e5-b3de-bcee0bda9714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.887186 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.887524 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0661a6c4-6ace-47e5-b3de-bcee0bda9714-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:23 crc kubenswrapper[4873]: I0219 10:33:23.887536 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6ws\" (UniqueName: \"kubernetes.io/projected/0661a6c4-6ace-47e5-b3de-bcee0bda9714-kube-api-access-kj6ws\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.275662 4873 generic.go:334] "Generic (PLEG): container finished" podID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerID="2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45" exitCode=0 Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.275705 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerDied","Data":"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45"} Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.275729 4873 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-2jmxn" event={"ID":"0661a6c4-6ace-47e5-b3de-bcee0bda9714","Type":"ContainerDied","Data":"3d648f78e1bea540f365ed9db4323aba5f331a0de290c0949ac76e5f5c3d0283"} Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.275738 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2jmxn" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.275745 4873 scope.go:117] "RemoveContainer" containerID="2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.306075 4873 scope.go:117] "RemoveContainer" containerID="58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.337253 4873 scope.go:117] "RemoveContainer" containerID="e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.337636 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.350515 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2jmxn"] Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.384987 4873 scope.go:117] "RemoveContainer" containerID="2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45" Feb 19 10:33:24 crc kubenswrapper[4873]: E0219 10:33:24.385678 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45\": container with ID starting with 2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45 not found: ID does not exist" containerID="2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.385712 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45"} err="failed to get container status \"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45\": rpc error: code = NotFound desc = could not find container \"2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45\": container with ID starting with 2cec4414aa0c1ffea81367f46dfe60a7afc0ca4a4aaf7c2b3439fdb480eebe45 not found: ID does not exist" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.385734 4873 scope.go:117] "RemoveContainer" containerID="58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55" Feb 19 10:33:24 crc kubenswrapper[4873]: E0219 10:33:24.386006 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55\": container with ID starting with 58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55 not found: ID does not exist" containerID="58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.386021 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55"} err="failed to get container status \"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55\": rpc error: code = NotFound desc = could not find container \"58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55\": container with ID starting with 58616f1a828785e989384245e1f4513a0f70225ce36a2f5c5b7eef196afe6a55 not found: ID does not exist" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.386034 4873 scope.go:117] "RemoveContainer" containerID="e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7" Feb 19 10:33:24 crc kubenswrapper[4873]: E0219 
10:33:24.386271 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7\": container with ID starting with e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7 not found: ID does not exist" containerID="e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7" Feb 19 10:33:24 crc kubenswrapper[4873]: I0219 10:33:24.386288 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7"} err="failed to get container status \"e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7\": rpc error: code = NotFound desc = could not find container \"e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7\": container with ID starting with e89c20b48341f8f16f1a8dca5bcf3723a43600c0e43831896fa283627ac561d7 not found: ID does not exist" Feb 19 10:33:25 crc kubenswrapper[4873]: I0219 10:33:25.498156 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" path="/var/lib/kubelet/pods/0661a6c4-6ace-47e5-b3de-bcee0bda9714/volumes" Feb 19 10:33:32 crc kubenswrapper[4873]: I0219 10:33:32.484716 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:33:32 crc kubenswrapper[4873]: E0219 10:33:32.485561 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:33:47 crc kubenswrapper[4873]: I0219 10:33:47.485258 
4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:33:47 crc kubenswrapper[4873]: E0219 10:33:47.486000 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:33:58 crc kubenswrapper[4873]: I0219 10:33:58.484933 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:33:58 crc kubenswrapper[4873]: E0219 10:33:58.485750 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:34:11 crc kubenswrapper[4873]: I0219 10:34:11.493409 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:34:11 crc kubenswrapper[4873]: E0219 10:34:11.494451 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:34:23 crc kubenswrapper[4873]: I0219 
10:34:23.485557 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:34:23 crc kubenswrapper[4873]: E0219 10:34:23.486550 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:34:36 crc kubenswrapper[4873]: I0219 10:34:36.484048 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:34:36 crc kubenswrapper[4873]: E0219 10:34:36.485232 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:34:48 crc kubenswrapper[4873]: I0219 10:34:48.484869 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:34:48 crc kubenswrapper[4873]: E0219 10:34:48.485579 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:35:00 crc 
kubenswrapper[4873]: I0219 10:35:00.484650 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:35:00 crc kubenswrapper[4873]: E0219 10:35:00.485646 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:35:14 crc kubenswrapper[4873]: I0219 10:35:14.484633 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:35:14 crc kubenswrapper[4873]: E0219 10:35:14.485274 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:35:26 crc kubenswrapper[4873]: I0219 10:35:26.485809 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:35:26 crc kubenswrapper[4873]: E0219 10:35:26.486508 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 
19 10:35:41 crc kubenswrapper[4873]: I0219 10:35:41.492114 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:35:41 crc kubenswrapper[4873]: E0219 10:35:41.492954 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:35:53 crc kubenswrapper[4873]: I0219 10:35:53.484212 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:35:53 crc kubenswrapper[4873]: E0219 10:35:53.484939 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:07 crc kubenswrapper[4873]: I0219 10:36:07.484801 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:36:07 crc kubenswrapper[4873]: E0219 10:36:07.487281 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" 
podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:19 crc kubenswrapper[4873]: I0219 10:36:19.485151 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:36:19 crc kubenswrapper[4873]: E0219 10:36:19.485967 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:32 crc kubenswrapper[4873]: I0219 10:36:32.484859 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:36:32 crc kubenswrapper[4873]: E0219 10:36:32.485591 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:44 crc kubenswrapper[4873]: I0219 10:36:44.484695 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:36:44 crc kubenswrapper[4873]: E0219 10:36:44.485501 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.534646 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535157 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="extract-content" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535174 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="extract-content" Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535218 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535234 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535254 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="extract-utilities" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535263 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="extract-utilities" Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535280 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535287 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535302 4873 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="extract-content" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535308 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="extract-content" Feb 19 10:36:45 crc kubenswrapper[4873]: E0219 10:36:45.535330 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="extract-utilities" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535337 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="extract-utilities" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535610 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f117c0-1029-4b43-ab4c-486312acf531" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.535635 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0661a6c4-6ace-47e5-b3de-bcee0bda9714" containerName="registry-server" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.538440 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.572565 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.691633 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.692190 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlm9n\" (UniqueName: \"kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.692434 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.794707 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlm9n\" (UniqueName: \"kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.794827 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.794979 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.795383 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.795559 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.815357 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlm9n\" (UniqueName: \"kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n\") pod \"certified-operators-xvs9k\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:45 crc kubenswrapper[4873]: I0219 10:36:45.862934 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:46 crc kubenswrapper[4873]: I0219 10:36:46.423070 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:46 crc kubenswrapper[4873]: I0219 10:36:46.730047 4873 generic.go:334] "Generic (PLEG): container finished" podID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerID="595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e" exitCode=0 Feb 19 10:36:46 crc kubenswrapper[4873]: I0219 10:36:46.730216 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerDied","Data":"595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e"} Feb 19 10:36:46 crc kubenswrapper[4873]: I0219 10:36:46.730406 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerStarted","Data":"dabe8dc4309c8bccdc31acf5b6a881c32a5eaff06ffd335baa46a3af43b8b798"} Feb 19 10:36:46 crc kubenswrapper[4873]: I0219 10:36:46.732083 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:36:48 crc kubenswrapper[4873]: I0219 10:36:48.749067 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerStarted","Data":"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821"} Feb 19 10:36:49 crc kubenswrapper[4873]: I0219 10:36:49.759013 4873 generic.go:334] "Generic (PLEG): container finished" podID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerID="195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821" exitCode=0 Feb 19 10:36:49 crc kubenswrapper[4873]: I0219 10:36:49.759262 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerDied","Data":"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821"} Feb 19 10:36:50 crc kubenswrapper[4873]: I0219 10:36:50.778994 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerStarted","Data":"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf"} Feb 19 10:36:50 crc kubenswrapper[4873]: I0219 10:36:50.804306 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvs9k" podStartSLOduration=2.396104145 podStartE2EDuration="5.804288137s" podCreationTimestamp="2026-02-19 10:36:45 +0000 UTC" firstStartedPulling="2026-02-19 10:36:46.731807013 +0000 UTC m=+3116.021238651" lastFinishedPulling="2026-02-19 10:36:50.139991005 +0000 UTC m=+3119.429422643" observedRunningTime="2026-02-19 10:36:50.79647196 +0000 UTC m=+3120.085903598" watchObservedRunningTime="2026-02-19 10:36:50.804288137 +0000 UTC m=+3120.093719775" Feb 19 10:36:55 crc kubenswrapper[4873]: I0219 10:36:55.484958 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b" Feb 19 10:36:55 crc kubenswrapper[4873]: E0219 10:36:55.486349 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:36:55 crc kubenswrapper[4873]: I0219 10:36:55.863353 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:55 crc kubenswrapper[4873]: I0219 10:36:55.863777 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:55 crc kubenswrapper[4873]: I0219 10:36:55.918056 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:56 crc kubenswrapper[4873]: I0219 10:36:56.884165 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:56 crc kubenswrapper[4873]: I0219 10:36:56.943783 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:58 crc kubenswrapper[4873]: I0219 10:36:58.850249 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvs9k" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="registry-server" containerID="cri-o://bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf" gracePeriod=2 Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.343858 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.502758 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities\") pod \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.502801 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlm9n\" (UniqueName: \"kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n\") pod \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.502962 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content\") pod \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\" (UID: \"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd\") " Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.506905 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities" (OuterVolumeSpecName: "utilities") pod "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" (UID: "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.512478 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n" (OuterVolumeSpecName: "kube-api-access-vlm9n") pod "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" (UID: "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd"). InnerVolumeSpecName "kube-api-access-vlm9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.555040 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" (UID: "70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.606572 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.606849 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.606861 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlm9n\" (UniqueName: \"kubernetes.io/projected/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd-kube-api-access-vlm9n\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.863115 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvs9k" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.863169 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerDied","Data":"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf"} Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.863241 4873 scope.go:117] "RemoveContainer" containerID="bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.862968 4873 generic.go:334] "Generic (PLEG): container finished" podID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerID="bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf" exitCode=0 Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.864320 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvs9k" event={"ID":"70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd","Type":"ContainerDied","Data":"dabe8dc4309c8bccdc31acf5b6a881c32a5eaff06ffd335baa46a3af43b8b798"} Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.892618 4873 scope.go:117] "RemoveContainer" containerID="195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.906381 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.915333 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvs9k"] Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.928964 4873 scope.go:117] "RemoveContainer" containerID="595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.970952 4873 scope.go:117] "RemoveContainer" 
containerID="bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf" Feb 19 10:36:59 crc kubenswrapper[4873]: E0219 10:36:59.971617 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf\": container with ID starting with bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf not found: ID does not exist" containerID="bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.971682 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf"} err="failed to get container status \"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf\": rpc error: code = NotFound desc = could not find container \"bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf\": container with ID starting with bde6b244ff8c35b06ab9e678a9ef0296297322519a75b1850656167406d50fbf not found: ID does not exist" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.971718 4873 scope.go:117] "RemoveContainer" containerID="195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821" Feb 19 10:36:59 crc kubenswrapper[4873]: E0219 10:36:59.972210 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821\": container with ID starting with 195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821 not found: ID does not exist" containerID="195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821" Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.972262 4873 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821"} err="failed to get container status \"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821\": rpc error: code = NotFound desc = could not find container \"195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821\": container with ID starting with 195edfdb7a76b47f01d4a03025fadd56103a54b406afbcead83f16f6187d6821 not found: ID does not exist"
Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.972298 4873 scope.go:117] "RemoveContainer" containerID="595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e"
Feb 19 10:36:59 crc kubenswrapper[4873]: E0219 10:36:59.972993 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e\": container with ID starting with 595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e not found: ID does not exist" containerID="595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e"
Feb 19 10:36:59 crc kubenswrapper[4873]: I0219 10:36:59.973071 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e"} err="failed to get container status \"595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e\": rpc error: code = NotFound desc = could not find container \"595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e\": container with ID starting with 595867504c3014b4b9ad93c216f9e475546590ca7b34fa0cf9a413a0376f5f5e not found: ID does not exist"
Feb 19 10:37:01 crc kubenswrapper[4873]: I0219 10:37:01.499674 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" path="/var/lib/kubelet/pods/70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd/volumes"
Feb 19 10:37:09 crc kubenswrapper[4873]: I0219 10:37:09.488010 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:37:09 crc kubenswrapper[4873]: E0219 10:37:09.488920 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:37:20 crc kubenswrapper[4873]: I0219 10:37:20.483669 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:37:20 crc kubenswrapper[4873]: E0219 10:37:20.484503 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:37:32 crc kubenswrapper[4873]: I0219 10:37:32.484141 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:37:32 crc kubenswrapper[4873]: E0219 10:37:32.484860 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:37:44 crc kubenswrapper[4873]: I0219 10:37:44.486405 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:37:44 crc kubenswrapper[4873]: E0219 10:37:44.488524 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:37:58 crc kubenswrapper[4873]: I0219 10:37:58.484757 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:37:58 crc kubenswrapper[4873]: E0219 10:37:58.485540 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:38:12 crc kubenswrapper[4873]: I0219 10:38:12.484766 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:38:12 crc kubenswrapper[4873]: E0219 10:38:12.485807 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:38:27 crc kubenswrapper[4873]: I0219 10:38:27.484518 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:38:28 crc kubenswrapper[4873]: I0219 10:38:28.730781 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f"}
Feb 19 10:40:48 crc kubenswrapper[4873]: I0219 10:40:48.240769 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:40:48 crc kubenswrapper[4873]: I0219 10:40:48.241324 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:41:18 crc kubenswrapper[4873]: I0219 10:41:18.240844 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:41:18 crc kubenswrapper[4873]: I0219 10:41:18.242076 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:41:48 crc kubenswrapper[4873]: I0219 10:41:48.240612 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:41:48 crc kubenswrapper[4873]: I0219 10:41:48.241318 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:41:48 crc kubenswrapper[4873]: I0219 10:41:48.241368 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7"
Feb 19 10:41:48 crc kubenswrapper[4873]: I0219 10:41:48.242323 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 10:41:48 crc kubenswrapper[4873]: I0219 10:41:48.242380 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f" gracePeriod=600
Feb 19 10:41:49 crc kubenswrapper[4873]: I0219 10:41:49.097919 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f" exitCode=0
Feb 19 10:41:49 crc kubenswrapper[4873]: I0219 10:41:49.098088 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f"}
Feb 19 10:41:49 crc kubenswrapper[4873]: I0219 10:41:49.098471 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4"}
Feb 19 10:41:49 crc kubenswrapper[4873]: I0219 10:41:49.098490 4873 scope.go:117] "RemoveContainer" containerID="626a71f4870e297e5c29d320555202aa31c88919871bbf2657a131f7539b445b"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.419699 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:42:54 crc kubenswrapper[4873]: E0219 10:42:54.421201 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="extract-content"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.421218 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="extract-content"
Feb 19 10:42:54 crc kubenswrapper[4873]: E0219 10:42:54.421232 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="extract-utilities"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.421241 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="extract-utilities"
Feb 19 10:42:54 crc kubenswrapper[4873]: E0219 10:42:54.421307 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="registry-server"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.421318 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="registry-server"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.422257 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4c2b0-7e7a-4e61-b63a-ee5b16d32ebd" containerName="registry-server"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.428640 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.479860 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.545213 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.545323 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.545473 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94h8\" (UniqueName: \"kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.647653 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.647745 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.648344 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.648479 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.648513 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94h8\" (UniqueName: \"kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.671562 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94h8\" (UniqueName: \"kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8\") pod \"community-operators-l45tw\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") " pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:54 crc kubenswrapper[4873]: I0219 10:42:54.768164 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:42:55 crc kubenswrapper[4873]: I0219 10:42:55.327540 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:42:55 crc kubenswrapper[4873]: I0219 10:42:55.721418 4873 generic.go:334] "Generic (PLEG): container finished" podID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerID="439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab" exitCode=0
Feb 19 10:42:55 crc kubenswrapper[4873]: I0219 10:42:55.721497 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerDied","Data":"439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab"}
Feb 19 10:42:55 crc kubenswrapper[4873]: I0219 10:42:55.721753 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerStarted","Data":"d8a69facd98b183ff4ff2a4293c6b027e750acad29ea74e2ec4a6d345726c9cf"}
Feb 19 10:42:55 crc kubenswrapper[4873]: I0219 10:42:55.724252 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 10:42:56 crc kubenswrapper[4873]: I0219 10:42:56.736124 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerStarted","Data":"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"}
Feb 19 10:43:00 crc kubenswrapper[4873]: I0219 10:43:00.781185 4873 generic.go:334] "Generic (PLEG): container finished" podID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerID="d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f" exitCode=0
Feb 19 10:43:00 crc kubenswrapper[4873]: I0219 10:43:00.781270 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerDied","Data":"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"}
Feb 19 10:43:01 crc kubenswrapper[4873]: I0219 10:43:01.796688 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerStarted","Data":"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"}
Feb 19 10:43:01 crc kubenswrapper[4873]: I0219 10:43:01.826859 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l45tw" podStartSLOduration=2.263772351 podStartE2EDuration="7.826840958s" podCreationTimestamp="2026-02-19 10:42:54 +0000 UTC" firstStartedPulling="2026-02-19 10:42:55.723947228 +0000 UTC m=+3485.013378866" lastFinishedPulling="2026-02-19 10:43:01.287015825 +0000 UTC m=+3490.576447473" observedRunningTime="2026-02-19 10:43:01.822985591 +0000 UTC m=+3491.112417229" watchObservedRunningTime="2026-02-19 10:43:01.826840958 +0000 UTC m=+3491.116272616"
Feb 19 10:43:04 crc kubenswrapper[4873]: I0219 10:43:04.769428 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:04 crc kubenswrapper[4873]: I0219 10:43:04.769756 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:04 crc kubenswrapper[4873]: I0219 10:43:04.814616 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:14 crc kubenswrapper[4873]: I0219 10:43:14.818051 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:14 crc kubenswrapper[4873]: I0219 10:43:14.875377 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:43:14 crc kubenswrapper[4873]: I0219 10:43:14.953264 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l45tw" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="registry-server" containerID="cri-o://50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d" gracePeriod=2
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.517174 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.603152 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content\") pod \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") "
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.603215 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities\") pod \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") "
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.603306 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94h8\" (UniqueName: \"kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8\") pod \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\" (UID: \"b9bd624b-d288-4fc4-a24c-1e3283b10bf6\") "
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.605063 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities" (OuterVolumeSpecName: "utilities") pod "b9bd624b-d288-4fc4-a24c-1e3283b10bf6" (UID: "b9bd624b-d288-4fc4-a24c-1e3283b10bf6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.611147 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8" (OuterVolumeSpecName: "kube-api-access-c94h8") pod "b9bd624b-d288-4fc4-a24c-1e3283b10bf6" (UID: "b9bd624b-d288-4fc4-a24c-1e3283b10bf6"). InnerVolumeSpecName "kube-api-access-c94h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.672464 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bd624b-d288-4fc4-a24c-1e3283b10bf6" (UID: "b9bd624b-d288-4fc4-a24c-1e3283b10bf6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.705507 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.705554 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.705571 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94h8\" (UniqueName: \"kubernetes.io/projected/b9bd624b-d288-4fc4-a24c-1e3283b10bf6-kube-api-access-c94h8\") on node \"crc\" DevicePath \"\""
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.967431 4873 generic.go:334] "Generic (PLEG): container finished" podID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerID="50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d" exitCode=0
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.967522 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerDied","Data":"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"}
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.967570 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l45tw" event={"ID":"b9bd624b-d288-4fc4-a24c-1e3283b10bf6","Type":"ContainerDied","Data":"d8a69facd98b183ff4ff2a4293c6b027e750acad29ea74e2ec4a6d345726c9cf"}
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.967600 4873 scope.go:117] "RemoveContainer" containerID="50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"
Feb 19 10:43:15 crc kubenswrapper[4873]: I0219 10:43:15.967890 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l45tw"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.009888 4873 scope.go:117] "RemoveContainer" containerID="d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.017702 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.032066 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l45tw"]
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.037449 4873 scope.go:117] "RemoveContainer" containerID="439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.087505 4873 scope.go:117] "RemoveContainer" containerID="50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"
Feb 19 10:43:16 crc kubenswrapper[4873]: E0219 10:43:16.088980 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d\": container with ID starting with 50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d not found: ID does not exist" containerID="50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.089056 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d"} err="failed to get container status \"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d\": rpc error: code = NotFound desc = could not find container \"50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d\": container with ID starting with 50764a4aca3b93a56a6dd657a34c269f0c8b72265385e7bce385f947dbb29c8d not found: ID does not exist"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.089121 4873 scope.go:117] "RemoveContainer" containerID="d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"
Feb 19 10:43:16 crc kubenswrapper[4873]: E0219 10:43:16.089868 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f\": container with ID starting with d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f not found: ID does not exist" containerID="d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.089916 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f"} err="failed to get container status \"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f\": rpc error: code = NotFound desc = could not find container \"d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f\": container with ID starting with d95bd0f4fa462c66a1498316d7b5c05f7c7eab1217db9c00ff06c38a64b9e64f not found: ID does not exist"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.089957 4873 scope.go:117] "RemoveContainer" containerID="439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab"
Feb 19 10:43:16 crc kubenswrapper[4873]: E0219 10:43:16.090419 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab\": container with ID starting with 439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab not found: ID does not exist" containerID="439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab"
Feb 19 10:43:16 crc kubenswrapper[4873]: I0219 10:43:16.090456 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab"} err="failed to get container status \"439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab\": rpc error: code = NotFound desc = could not find container \"439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab\": container with ID starting with 439235b1c3b4e94efa982e6a0cecbae46412fc2a2189779e92351095c7281eab not found: ID does not exist"
Feb 19 10:43:17 crc kubenswrapper[4873]: I0219 10:43:17.498027 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" path="/var/lib/kubelet/pods/b9bd624b-d288-4fc4-a24c-1e3283b10bf6/volumes"
Feb 19 10:43:48 crc kubenswrapper[4873]: I0219 10:43:48.240935 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:43:48 crc kubenswrapper[4873]: I0219 10:43:48.241576 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:44:18 crc kubenswrapper[4873]: I0219 10:44:18.241021 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:44:18 crc kubenswrapper[4873]: I0219 10:44:18.241591 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:44:48 crc kubenswrapper[4873]: I0219 10:44:48.240990 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:44:48 crc kubenswrapper[4873]: I0219 10:44:48.241801 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:44:48 crc kubenswrapper[4873]: I0219 10:44:48.242074 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7"
Feb 19 10:44:48 crc kubenswrapper[4873]: I0219 10:44:48.243199 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 10:44:48 crc kubenswrapper[4873]: I0219 10:44:48.243287 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" gracePeriod=600
Feb 19 10:44:48 crc kubenswrapper[4873]: E0219 10:44:48.368264 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:44:49 crc kubenswrapper[4873]: I0219 10:44:49.212979 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" exitCode=0
Feb 19 10:44:49 crc kubenswrapper[4873]: I0219 10:44:49.213085 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4"}
Feb 19 10:44:49 crc kubenswrapper[4873]: I0219 10:44:49.213228 4873 scope.go:117] "RemoveContainer" containerID="0d157f8531ba54c5f5368e05cf7f1a865a7d6835f71ea27ae242a81b66903a7f"
Feb 19 10:44:49 crc kubenswrapper[4873]: I0219 10:44:49.215212 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4"
Feb 19 10:44:49 crc kubenswrapper[4873]: E0219 10:44:49.216214 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.165452 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"]
Feb 19 10:45:00 crc kubenswrapper[4873]: E0219 10:45:00.166665 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="extract-content"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.166684 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="extract-content"
Feb 19 10:45:00 crc kubenswrapper[4873]: E0219 10:45:00.166701 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.166709 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4873]: E0219 10:45:00.166761 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="extract-utilities"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.166770 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="extract-utilities"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.167016 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bd624b-d288-4fc4-a24c-1e3283b10bf6" containerName="registry-server"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.167846 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.175517 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"]
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.182604 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.182772 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.301889 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.302538 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.302914 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twjnt\" (UniqueName: \"kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.404161 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.404303 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.404398 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twjnt\" (UniqueName: \"kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"
Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.405242 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.415226 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.424910 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twjnt\" (UniqueName: \"kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt\") pod \"collect-profiles-29524965-7h5c6\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.505272 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:00 crc kubenswrapper[4873]: I0219 10:45:00.964826 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"] Feb 19 10:45:01 crc kubenswrapper[4873]: I0219 10:45:01.332521 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" event={"ID":"e3672337-92bc-4e97-9c9e-c0a7e7cd284b","Type":"ContainerStarted","Data":"54b95d1d4eacbeaa7320e5a5833d0056b13e15ee90dc6b62a2553c6f88d2fff8"} Feb 19 10:45:01 crc kubenswrapper[4873]: I0219 10:45:01.334218 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" event={"ID":"e3672337-92bc-4e97-9c9e-c0a7e7cd284b","Type":"ContainerStarted","Data":"a7e7997adc2bfeb5ccc5b987d6d0f9aa1b9bafe6a9bab781afe26c452dd4c597"} Feb 19 10:45:01 crc kubenswrapper[4873]: I0219 10:45:01.360503 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" podStartSLOduration=1.360468367 podStartE2EDuration="1.360468367s" podCreationTimestamp="2026-02-19 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:45:01.351684837 +0000 UTC m=+3610.641116515" watchObservedRunningTime="2026-02-19 10:45:01.360468367 +0000 UTC m=+3610.649900005" Feb 19 10:45:02 crc kubenswrapper[4873]: I0219 10:45:02.342426 4873 generic.go:334] "Generic (PLEG): container finished" podID="e3672337-92bc-4e97-9c9e-c0a7e7cd284b" containerID="54b95d1d4eacbeaa7320e5a5833d0056b13e15ee90dc6b62a2553c6f88d2fff8" exitCode=0 Feb 19 10:45:02 crc kubenswrapper[4873]: I0219 10:45:02.342554 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" event={"ID":"e3672337-92bc-4e97-9c9e-c0a7e7cd284b","Type":"ContainerDied","Data":"54b95d1d4eacbeaa7320e5a5833d0056b13e15ee90dc6b62a2553c6f88d2fff8"} Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.485943 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:45:03 crc kubenswrapper[4873]: E0219 10:45:03.486513 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.681292 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.883960 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume\") pod \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.884249 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume\") pod \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.884398 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twjnt\" (UniqueName: \"kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt\") pod \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\" (UID: \"e3672337-92bc-4e97-9c9e-c0a7e7cd284b\") " Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.885433 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3672337-92bc-4e97-9c9e-c0a7e7cd284b" (UID: "e3672337-92bc-4e97-9c9e-c0a7e7cd284b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.891246 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3672337-92bc-4e97-9c9e-c0a7e7cd284b" (UID: "e3672337-92bc-4e97-9c9e-c0a7e7cd284b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.892440 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt" (OuterVolumeSpecName: "kube-api-access-twjnt") pod "e3672337-92bc-4e97-9c9e-c0a7e7cd284b" (UID: "e3672337-92bc-4e97-9c9e-c0a7e7cd284b"). InnerVolumeSpecName "kube-api-access-twjnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.986732 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.986767 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:03 crc kubenswrapper[4873]: I0219 10:45:03.986777 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twjnt\" (UniqueName: \"kubernetes.io/projected/e3672337-92bc-4e97-9c9e-c0a7e7cd284b-kube-api-access-twjnt\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:04 crc kubenswrapper[4873]: I0219 10:45:04.364803 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" event={"ID":"e3672337-92bc-4e97-9c9e-c0a7e7cd284b","Type":"ContainerDied","Data":"a7e7997adc2bfeb5ccc5b987d6d0f9aa1b9bafe6a9bab781afe26c452dd4c597"} Feb 19 10:45:04 crc kubenswrapper[4873]: I0219 10:45:04.364880 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7e7997adc2bfeb5ccc5b987d6d0f9aa1b9bafe6a9bab781afe26c452dd4c597" Feb 19 10:45:04 crc kubenswrapper[4873]: I0219 10:45:04.364998 4873 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6" Feb 19 10:45:04 crc kubenswrapper[4873]: I0219 10:45:04.767091 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm"] Feb 19 10:45:04 crc kubenswrapper[4873]: I0219 10:45:04.776924 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-796dm"] Feb 19 10:45:05 crc kubenswrapper[4873]: I0219 10:45:05.504593 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890a4af6-c400-4f2c-a387-edcbbc821b11" path="/var/lib/kubelet/pods/890a4af6-c400-4f2c-a387-edcbbc821b11/volumes" Feb 19 10:45:18 crc kubenswrapper[4873]: I0219 10:45:18.485370 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:45:18 crc kubenswrapper[4873]: E0219 10:45:18.486353 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:45:32 crc kubenswrapper[4873]: I0219 10:45:32.484768 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:45:32 crc kubenswrapper[4873]: E0219 10:45:32.486744 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:45:47 crc kubenswrapper[4873]: I0219 10:45:47.484519 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:45:47 crc kubenswrapper[4873]: E0219 10:45:47.485317 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:46:00 crc kubenswrapper[4873]: I0219 10:46:00.484605 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:46:00 crc kubenswrapper[4873]: E0219 10:46:00.485408 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:46:02 crc kubenswrapper[4873]: I0219 10:46:02.627239 4873 scope.go:117] "RemoveContainer" containerID="2ea87556ea1e2777f378238131c83ccd55a7eac5410c13097afbd46ee33f0929" Feb 19 10:46:11 crc kubenswrapper[4873]: I0219 10:46:11.490415 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:46:11 crc kubenswrapper[4873]: E0219 10:46:11.493375 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:46:23 crc kubenswrapper[4873]: I0219 10:46:23.484187 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:46:23 crc kubenswrapper[4873]: E0219 10:46:23.485033 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:46:34 crc kubenswrapper[4873]: I0219 10:46:34.483884 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:46:34 crc kubenswrapper[4873]: E0219 10:46:34.484740 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:46:48 crc kubenswrapper[4873]: I0219 10:46:48.485044 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:46:48 crc kubenswrapper[4873]: E0219 10:46:48.485876 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:03 crc kubenswrapper[4873]: I0219 10:47:03.484555 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:47:03 crc kubenswrapper[4873]: E0219 10:47:03.485499 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:17 crc kubenswrapper[4873]: I0219 10:47:17.484476 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:47:17 crc kubenswrapper[4873]: E0219 10:47:17.485364 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:29 crc kubenswrapper[4873]: I0219 10:47:29.484525 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:47:29 crc kubenswrapper[4873]: E0219 10:47:29.485362 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:44 crc kubenswrapper[4873]: I0219 10:47:44.493326 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:47:44 crc kubenswrapper[4873]: E0219 10:47:44.495785 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.206288 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:47:51 crc kubenswrapper[4873]: E0219 10:47:51.207465 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3672337-92bc-4e97-9c9e-c0a7e7cd284b" containerName="collect-profiles" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.207486 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3672337-92bc-4e97-9c9e-c0a7e7cd284b" containerName="collect-profiles" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.207786 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3672337-92bc-4e97-9c9e-c0a7e7cd284b" containerName="collect-profiles" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.209744 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.227155 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.327770 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.327995 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.328028 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vkd\" (UniqueName: \"kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.430445 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.430642 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.430675 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vkd\" (UniqueName: \"kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.431118 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.431209 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.456257 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26vkd\" (UniqueName: \"kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd\") pod \"certified-operators-gn8wh\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:51 crc kubenswrapper[4873]: I0219 10:47:51.540793 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:47:52 crc kubenswrapper[4873]: I0219 10:47:52.183453 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:47:52 crc kubenswrapper[4873]: I0219 10:47:52.231943 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerStarted","Data":"15e6bed6cf4319505dcff2fd62cf8014563b596799be90375f21165a31ca87b7"} Feb 19 10:47:53 crc kubenswrapper[4873]: I0219 10:47:53.248256 4873 generic.go:334] "Generic (PLEG): container finished" podID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerID="d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d" exitCode=0 Feb 19 10:47:53 crc kubenswrapper[4873]: I0219 10:47:53.248353 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerDied","Data":"d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d"} Feb 19 10:47:55 crc kubenswrapper[4873]: I0219 10:47:55.265923 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerStarted","Data":"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f"} Feb 19 10:47:56 crc kubenswrapper[4873]: I0219 10:47:56.484459 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:47:56 crc kubenswrapper[4873]: E0219 10:47:56.484967 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:47:59 crc kubenswrapper[4873]: I0219 10:47:59.318956 4873 generic.go:334] "Generic (PLEG): container finished" podID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerID="43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f" exitCode=0 Feb 19 10:47:59 crc kubenswrapper[4873]: I0219 10:47:59.318996 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerDied","Data":"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f"} Feb 19 10:47:59 crc kubenswrapper[4873]: I0219 10:47:59.323217 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:48:00 crc kubenswrapper[4873]: I0219 10:48:00.346805 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerStarted","Data":"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161"} Feb 19 10:48:00 crc kubenswrapper[4873]: I0219 10:48:00.373522 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gn8wh" podStartSLOduration=2.767193308 podStartE2EDuration="9.37350121s" podCreationTimestamp="2026-02-19 10:47:51 +0000 UTC" firstStartedPulling="2026-02-19 10:47:53.251381586 +0000 UTC m=+3782.540813264" lastFinishedPulling="2026-02-19 10:47:59.857689538 +0000 UTC m=+3789.147121166" observedRunningTime="2026-02-19 10:48:00.364660529 +0000 UTC m=+3789.654092167" watchObservedRunningTime="2026-02-19 10:48:00.37350121 +0000 UTC m=+3789.662932848" Feb 19 10:48:01 crc kubenswrapper[4873]: I0219 10:48:01.541596 4873 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:01 crc kubenswrapper[4873]: I0219 10:48:01.541981 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:02 crc kubenswrapper[4873]: I0219 10:48:02.586520 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gn8wh" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="registry-server" probeResult="failure" output=< Feb 19 10:48:02 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:48:02 crc kubenswrapper[4873]: > Feb 19 10:48:11 crc kubenswrapper[4873]: I0219 10:48:11.491014 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:48:11 crc kubenswrapper[4873]: E0219 10:48:11.491767 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:48:11 crc kubenswrapper[4873]: I0219 10:48:11.597077 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:11 crc kubenswrapper[4873]: I0219 10:48:11.647956 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:11 crc kubenswrapper[4873]: I0219 10:48:11.839895 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:48:13 crc kubenswrapper[4873]: 
I0219 10:48:13.473401 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gn8wh" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="registry-server" containerID="cri-o://ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161" gracePeriod=2 Feb 19 10:48:13 crc kubenswrapper[4873]: I0219 10:48:13.999121 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.120717 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26vkd\" (UniqueName: \"kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd\") pod \"a65669cf-686e-4ae9-a210-66ae759bfe37\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.121280 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities\") pod \"a65669cf-686e-4ae9-a210-66ae759bfe37\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.121328 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content\") pod \"a65669cf-686e-4ae9-a210-66ae759bfe37\" (UID: \"a65669cf-686e-4ae9-a210-66ae759bfe37\") " Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.122659 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities" (OuterVolumeSpecName: "utilities") pod "a65669cf-686e-4ae9-a210-66ae759bfe37" (UID: "a65669cf-686e-4ae9-a210-66ae759bfe37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.128380 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd" (OuterVolumeSpecName: "kube-api-access-26vkd") pod "a65669cf-686e-4ae9-a210-66ae759bfe37" (UID: "a65669cf-686e-4ae9-a210-66ae759bfe37"). InnerVolumeSpecName "kube-api-access-26vkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.189753 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a65669cf-686e-4ae9-a210-66ae759bfe37" (UID: "a65669cf-686e-4ae9-a210-66ae759bfe37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.223866 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26vkd\" (UniqueName: \"kubernetes.io/projected/a65669cf-686e-4ae9-a210-66ae759bfe37-kube-api-access-26vkd\") on node \"crc\" DevicePath \"\"" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.223908 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.223918 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65669cf-686e-4ae9-a210-66ae759bfe37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.513428 4873 generic.go:334] "Generic (PLEG): container finished" podID="a65669cf-686e-4ae9-a210-66ae759bfe37" 
containerID="ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161" exitCode=0 Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.513495 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerDied","Data":"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161"} Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.513559 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gn8wh" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.513583 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8wh" event={"ID":"a65669cf-686e-4ae9-a210-66ae759bfe37","Type":"ContainerDied","Data":"15e6bed6cf4319505dcff2fd62cf8014563b596799be90375f21165a31ca87b7"} Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.513610 4873 scope.go:117] "RemoveContainer" containerID="ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.549538 4873 scope.go:117] "RemoveContainer" containerID="43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.551039 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.561046 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gn8wh"] Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.570642 4873 scope.go:117] "RemoveContainer" containerID="d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.622438 4873 scope.go:117] "RemoveContainer" containerID="ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161" Feb 19 
10:48:14 crc kubenswrapper[4873]: E0219 10:48:14.622972 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161\": container with ID starting with ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161 not found: ID does not exist" containerID="ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.623020 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161"} err="failed to get container status \"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161\": rpc error: code = NotFound desc = could not find container \"ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161\": container with ID starting with ebe44c5d8bed094f0deb2d3af386d2b9793282994e7d65f3a10ce5623c780161 not found: ID does not exist" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.623048 4873 scope.go:117] "RemoveContainer" containerID="43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f" Feb 19 10:48:14 crc kubenswrapper[4873]: E0219 10:48:14.623470 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f\": container with ID starting with 43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f not found: ID does not exist" containerID="43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.623492 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f"} err="failed to get container status 
\"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f\": rpc error: code = NotFound desc = could not find container \"43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f\": container with ID starting with 43e61c3a28ba9a4e810156c46101fe16c402171370482f2ea90d30248e76b25f not found: ID does not exist" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.623504 4873 scope.go:117] "RemoveContainer" containerID="d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d" Feb 19 10:48:14 crc kubenswrapper[4873]: E0219 10:48:14.623934 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d\": container with ID starting with d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d not found: ID does not exist" containerID="d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d" Feb 19 10:48:14 crc kubenswrapper[4873]: I0219 10:48:14.624012 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d"} err="failed to get container status \"d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d\": rpc error: code = NotFound desc = could not find container \"d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d\": container with ID starting with d0bcc2985ae3863253dbcfc0804c289bd2f9b9d3b0af599bfe995ac18937285d not found: ID does not exist" Feb 19 10:48:15 crc kubenswrapper[4873]: I0219 10:48:15.495797 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" path="/var/lib/kubelet/pods/a65669cf-686e-4ae9-a210-66ae759bfe37/volumes" Feb 19 10:48:23 crc kubenswrapper[4873]: I0219 10:48:23.484300 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 
10:48:23 crc kubenswrapper[4873]: E0219 10:48:23.485184 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:48:34 crc kubenswrapper[4873]: I0219 10:48:34.484529 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:48:34 crc kubenswrapper[4873]: E0219 10:48:34.485528 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:48:49 crc kubenswrapper[4873]: I0219 10:48:49.484753 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:48:49 crc kubenswrapper[4873]: E0219 10:48:49.485578 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:49:03 crc kubenswrapper[4873]: I0219 10:49:03.484114 4873 scope.go:117] "RemoveContainer" 
containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:49:03 crc kubenswrapper[4873]: E0219 10:49:03.485003 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:49:16 crc kubenswrapper[4873]: I0219 10:49:16.484068 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:49:16 crc kubenswrapper[4873]: E0219 10:49:16.484885 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:49:30 crc kubenswrapper[4873]: I0219 10:49:30.484679 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:49:30 crc kubenswrapper[4873]: E0219 10:49:30.485222 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:49:43 crc kubenswrapper[4873]: I0219 10:49:43.485286 4873 scope.go:117] 
"RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:49:43 crc kubenswrapper[4873]: E0219 10:49:43.486567 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:49:57 crc kubenswrapper[4873]: I0219 10:49:57.484602 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:49:57 crc kubenswrapper[4873]: I0219 10:49:57.935448 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9"} Feb 19 10:52:18 crc kubenswrapper[4873]: I0219 10:52:18.240161 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:52:18 crc kubenswrapper[4873]: I0219 10:52:18.240931 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.458340 4873 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:25 crc kubenswrapper[4873]: E0219 10:52:25.459389 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="extract-content" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.459405 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="extract-content" Feb 19 10:52:25 crc kubenswrapper[4873]: E0219 10:52:25.459444 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="extract-utilities" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.459454 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="extract-utilities" Feb 19 10:52:25 crc kubenswrapper[4873]: E0219 10:52:25.459497 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="registry-server" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.459506 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="registry-server" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.459741 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65669cf-686e-4ae9-a210-66ae759bfe37" containerName="registry-server" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.461714 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.502318 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.601516 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.601656 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqm5h\" (UniqueName: \"kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.601874 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.703278 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.703597 4873 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vqm5h\" (UniqueName: \"kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.703766 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.703778 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.704172 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.726954 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqm5h\" (UniqueName: \"kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h\") pod \"redhat-operators-r78w4\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:25 crc kubenswrapper[4873]: I0219 10:52:25.795847 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:26 crc kubenswrapper[4873]: I0219 10:52:26.356528 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:26 crc kubenswrapper[4873]: I0219 10:52:26.714270 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerStarted","Data":"6a0e3dd90bd9baa2e4af30ac07e6618b6ab8119c61a7e1c9c34b58b29982ab50"} Feb 19 10:52:27 crc kubenswrapper[4873]: I0219 10:52:27.728329 4873 generic.go:334] "Generic (PLEG): container finished" podID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerID="27af10342a7e1ad2491360a91681d932dd63d524dbab6ca3add1793c15d831ca" exitCode=0 Feb 19 10:52:27 crc kubenswrapper[4873]: I0219 10:52:27.728411 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerDied","Data":"27af10342a7e1ad2491360a91681d932dd63d524dbab6ca3add1793c15d831ca"} Feb 19 10:52:29 crc kubenswrapper[4873]: I0219 10:52:29.760592 4873 generic.go:334] "Generic (PLEG): container finished" podID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerID="2e2552b2c4ecf36a5cf4a36c966c229ea32d862bd7b139e3133ee0c86e6b974d" exitCode=0 Feb 19 10:52:29 crc kubenswrapper[4873]: I0219 10:52:29.761008 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerDied","Data":"2e2552b2c4ecf36a5cf4a36c966c229ea32d862bd7b139e3133ee0c86e6b974d"} Feb 19 10:52:30 crc kubenswrapper[4873]: I0219 10:52:30.773278 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" 
event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerStarted","Data":"c7b53d3544f9dd7da88afeb6fa02fe407f19ea68d37f09767e6132e62b14b454"} Feb 19 10:52:35 crc kubenswrapper[4873]: I0219 10:52:35.797052 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:35 crc kubenswrapper[4873]: I0219 10:52:35.797663 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.338957 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r78w4" podStartSLOduration=8.843319756 podStartE2EDuration="11.338935225s" podCreationTimestamp="2026-02-19 10:52:25 +0000 UTC" firstStartedPulling="2026-02-19 10:52:27.731766128 +0000 UTC m=+4057.021197766" lastFinishedPulling="2026-02-19 10:52:30.227381567 +0000 UTC m=+4059.516813235" observedRunningTime="2026-02-19 10:52:30.798677552 +0000 UTC m=+4060.088109230" watchObservedRunningTime="2026-02-19 10:52:36.338935225 +0000 UTC m=+4065.628366863" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.353063 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.355991 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.363767 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.546323 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbvt\" (UniqueName: \"kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.546400 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.547085 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.649069 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.649190 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2hbvt\" (UniqueName: \"kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.649236 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.649862 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.651174 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:36 crc kubenswrapper[4873]: I0219 10:52:36.872309 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r78w4" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="registry-server" probeResult="failure" output=< Feb 19 10:52:36 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 10:52:36 crc kubenswrapper[4873]: > Feb 19 10:52:37 crc kubenswrapper[4873]: I0219 10:52:37.087052 4873 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-2hbvt\" (UniqueName: \"kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt\") pod \"redhat-marketplace-xxf58\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:37 crc kubenswrapper[4873]: I0219 10:52:37.281176 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:37 crc kubenswrapper[4873]: I0219 10:52:37.823693 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:37 crc kubenswrapper[4873]: I0219 10:52:37.869350 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxf58" event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerStarted","Data":"194a5c8ec3965dc48c4446e3c729e96059f10e97ac315dc79d6033eed5b46683"} Feb 19 10:52:38 crc kubenswrapper[4873]: I0219 10:52:38.881735 4873 generic.go:334] "Generic (PLEG): container finished" podID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerID="a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be" exitCode=0 Feb 19 10:52:38 crc kubenswrapper[4873]: I0219 10:52:38.881836 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxf58" event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerDied","Data":"a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be"} Feb 19 10:52:40 crc kubenswrapper[4873]: I0219 10:52:40.902908 4873 generic.go:334] "Generic (PLEG): container finished" podID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerID="9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f" exitCode=0 Feb 19 10:52:40 crc kubenswrapper[4873]: I0219 10:52:40.903505 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxf58" 
event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerDied","Data":"9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f"} Feb 19 10:52:42 crc kubenswrapper[4873]: I0219 10:52:42.946762 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxf58" event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerStarted","Data":"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4"} Feb 19 10:52:42 crc kubenswrapper[4873]: I0219 10:52:42.970471 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxf58" podStartSLOduration=4.018271224 podStartE2EDuration="6.970450482s" podCreationTimestamp="2026-02-19 10:52:36 +0000 UTC" firstStartedPulling="2026-02-19 10:52:38.883913969 +0000 UTC m=+4068.173345607" lastFinishedPulling="2026-02-19 10:52:41.836093227 +0000 UTC m=+4071.125524865" observedRunningTime="2026-02-19 10:52:42.966235377 +0000 UTC m=+4072.255667025" watchObservedRunningTime="2026-02-19 10:52:42.970450482 +0000 UTC m=+4072.259882130" Feb 19 10:52:45 crc kubenswrapper[4873]: I0219 10:52:45.851391 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:45 crc kubenswrapper[4873]: I0219 10:52:45.905174 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:46 crc kubenswrapper[4873]: I0219 10:52:46.092691 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:46 crc kubenswrapper[4873]: I0219 10:52:46.980146 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r78w4" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="registry-server" 
containerID="cri-o://c7b53d3544f9dd7da88afeb6fa02fe407f19ea68d37f09767e6132e62b14b454" gracePeriod=2 Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.281349 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.282440 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.951980 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.995466 4873 generic.go:334] "Generic (PLEG): container finished" podID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerID="c7b53d3544f9dd7da88afeb6fa02fe407f19ea68d37f09767e6132e62b14b454" exitCode=0 Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.995611 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerDied","Data":"c7b53d3544f9dd7da88afeb6fa02fe407f19ea68d37f09767e6132e62b14b454"} Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.995655 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r78w4" event={"ID":"0d91352c-7639-49f0-baf7-bd343bb59c42","Type":"ContainerDied","Data":"6a0e3dd90bd9baa2e4af30ac07e6618b6ab8119c61a7e1c9c34b58b29982ab50"} Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:47.995681 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a0e3dd90bd9baa2e4af30ac07e6618b6ab8119c61a7e1c9c34b58b29982ab50" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.003072 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.040326 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.096474 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content\") pod \"0d91352c-7639-49f0-baf7-bd343bb59c42\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.096636 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities\") pod \"0d91352c-7639-49f0-baf7-bd343bb59c42\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.098000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities" (OuterVolumeSpecName: "utilities") pod "0d91352c-7639-49f0-baf7-bd343bb59c42" (UID: "0d91352c-7639-49f0-baf7-bd343bb59c42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.098079 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqm5h\" (UniqueName: \"kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h\") pod \"0d91352c-7639-49f0-baf7-bd343bb59c42\" (UID: \"0d91352c-7639-49f0-baf7-bd343bb59c42\") " Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.099047 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.104942 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h" (OuterVolumeSpecName: "kube-api-access-vqm5h") pod "0d91352c-7639-49f0-baf7-bd343bb59c42" (UID: "0d91352c-7639-49f0-baf7-bd343bb59c42"). InnerVolumeSpecName "kube-api-access-vqm5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.200910 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqm5h\" (UniqueName: \"kubernetes.io/projected/0d91352c-7639-49f0-baf7-bd343bb59c42-kube-api-access-vqm5h\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.235878 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d91352c-7639-49f0-baf7-bd343bb59c42" (UID: "0d91352c-7639-49f0-baf7-bd343bb59c42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.240506 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.240567 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.302609 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d91352c-7639-49f0-baf7-bd343bb59c42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:48 crc kubenswrapper[4873]: I0219 10:52:48.918040 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:49 crc kubenswrapper[4873]: I0219 10:52:49.003628 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r78w4" Feb 19 10:52:49 crc kubenswrapper[4873]: I0219 10:52:49.056355 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:49 crc kubenswrapper[4873]: I0219 10:52:49.066126 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r78w4"] Feb 19 10:52:49 crc kubenswrapper[4873]: I0219 10:52:49.494827 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" path="/var/lib/kubelet/pods/0d91352c-7639-49f0-baf7-bd343bb59c42/volumes" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.011471 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxf58" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="registry-server" containerID="cri-o://6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4" gracePeriod=2 Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.530856 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.649969 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities\") pod \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.650050 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbvt\" (UniqueName: \"kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt\") pod \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.650124 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content\") pod \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\" (UID: \"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a\") " Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.651000 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities" (OuterVolumeSpecName: "utilities") pod "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" (UID: "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.658123 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt" (OuterVolumeSpecName: "kube-api-access-2hbvt") pod "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" (UID: "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a"). InnerVolumeSpecName "kube-api-access-2hbvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.752664 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.752693 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbvt\" (UniqueName: \"kubernetes.io/projected/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-kube-api-access-2hbvt\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.792947 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" (UID: "8f4dc211-905d-41b4-bb4a-3f7a61ddc43a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:50 crc kubenswrapper[4873]: I0219 10:52:50.854188 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.023649 4873 generic.go:334] "Generic (PLEG): container finished" podID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerID="6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4" exitCode=0 Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.023691 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxf58" event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerDied","Data":"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4"} Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.023717 4873 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-xxf58" event={"ID":"8f4dc211-905d-41b4-bb4a-3f7a61ddc43a","Type":"ContainerDied","Data":"194a5c8ec3965dc48c4446e3c729e96059f10e97ac315dc79d6033eed5b46683"} Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.023734 4873 scope.go:117] "RemoveContainer" containerID="6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.023751 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxf58" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.054982 4873 scope.go:117] "RemoveContainer" containerID="9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.060916 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.069757 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxf58"] Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.078904 4873 scope.go:117] "RemoveContainer" containerID="a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.134657 4873 scope.go:117] "RemoveContainer" containerID="6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4" Feb 19 10:52:51 crc kubenswrapper[4873]: E0219 10:52:51.135293 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4\": container with ID starting with 6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4 not found: ID does not exist" containerID="6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.135350 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4"} err="failed to get container status \"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4\": rpc error: code = NotFound desc = could not find container \"6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4\": container with ID starting with 6f2919947fabd994e41a4657c0c39dc555238ed077e615ca386b18a394515ac4 not found: ID does not exist" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.135386 4873 scope.go:117] "RemoveContainer" containerID="9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f" Feb 19 10:52:51 crc kubenswrapper[4873]: E0219 10:52:51.136574 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f\": container with ID starting with 9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f not found: ID does not exist" containerID="9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.136600 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f"} err="failed to get container status \"9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f\": rpc error: code = NotFound desc = could not find container \"9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f\": container with ID starting with 9576149c2a6edb3dd307caf70a679b5bcabf22e19412e9a41aab4e35d4aa0e9f not found: ID does not exist" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.136622 4873 scope.go:117] "RemoveContainer" containerID="a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be" Feb 19 10:52:51 crc kubenswrapper[4873]: E0219 
10:52:51.136992 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be\": container with ID starting with a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be not found: ID does not exist" containerID="a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.137036 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be"} err="failed to get container status \"a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be\": rpc error: code = NotFound desc = could not find container \"a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be\": container with ID starting with a13f60368c0055f1e74315a69214cbe8f6e44e0d47ad4202b97e5f3d097b12be not found: ID does not exist" Feb 19 10:52:51 crc kubenswrapper[4873]: I0219 10:52:51.498090 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" path="/var/lib/kubelet/pods/8f4dc211-905d-41b4-bb4a-3f7a61ddc43a/volumes" Feb 19 10:53:18 crc kubenswrapper[4873]: I0219 10:53:18.241431 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:53:18 crc kubenswrapper[4873]: I0219 10:53:18.241964 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 10:53:18 crc kubenswrapper[4873]: I0219 10:53:18.242016 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:53:18 crc kubenswrapper[4873]: I0219 10:53:18.242838 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:53:18 crc kubenswrapper[4873]: I0219 10:53:18.243009 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9" gracePeriod=600 Feb 19 10:53:19 crc kubenswrapper[4873]: I0219 10:53:19.283248 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9" exitCode=0 Feb 19 10:53:19 crc kubenswrapper[4873]: I0219 10:53:19.283341 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9"} Feb 19 10:53:19 crc kubenswrapper[4873]: I0219 10:53:19.283529 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de"} Feb 19 10:53:19 crc 
kubenswrapper[4873]: I0219 10:53:19.283550 4873 scope.go:117] "RemoveContainer" containerID="ea511bbcc48093928a25a3a4b468255a7f9fbafc2550fa8fd49de20ec45c97a4" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.393257 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394396 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="extract-content" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394414 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="extract-content" Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394430 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394436 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394446 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="extract-content" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394452 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="extract-content" Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394462 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394468 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394491 4873 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="extract-utilities" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394498 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="extract-utilities" Feb 19 10:53:43 crc kubenswrapper[4873]: E0219 10:53:43.394522 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="extract-utilities" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394528 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="extract-utilities" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394705 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d91352c-7639-49f0-baf7-bd343bb59c42" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.394725 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4dc211-905d-41b4-bb4a-3f7a61ddc43a" containerName="registry-server" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.396355 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.412563 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.462799 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.462879 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.463125 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlv7g\" (UniqueName: \"kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.564904 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlv7g\" (UniqueName: \"kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.566181 4873 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.566350 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.567046 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.567154 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.604870 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlv7g\" (UniqueName: \"kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g\") pod \"community-operators-gq42m\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:43 crc kubenswrapper[4873]: I0219 10:53:43.728457 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:44 crc kubenswrapper[4873]: I0219 10:53:44.273524 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:44 crc kubenswrapper[4873]: I0219 10:53:44.527582 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerStarted","Data":"00e9be978a5058cfb41fc6f67a24942030ba394962d3935402b0a652ff531cd7"} Feb 19 10:53:45 crc kubenswrapper[4873]: I0219 10:53:45.542687 4873 generic.go:334] "Generic (PLEG): container finished" podID="adb060ed-98a8-4d81-820a-8e2d26500534" containerID="67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc" exitCode=0 Feb 19 10:53:45 crc kubenswrapper[4873]: I0219 10:53:45.542923 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerDied","Data":"67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc"} Feb 19 10:53:45 crc kubenswrapper[4873]: I0219 10:53:45.545353 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:53:47 crc kubenswrapper[4873]: I0219 10:53:47.564057 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerStarted","Data":"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728"} Feb 19 10:53:48 crc kubenswrapper[4873]: I0219 10:53:48.575663 4873 generic.go:334] "Generic (PLEG): container finished" podID="adb060ed-98a8-4d81-820a-8e2d26500534" containerID="0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728" exitCode=0 Feb 19 10:53:48 crc kubenswrapper[4873]: I0219 10:53:48.575727 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerDied","Data":"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728"} Feb 19 10:53:49 crc kubenswrapper[4873]: I0219 10:53:49.590590 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerStarted","Data":"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e"} Feb 19 10:53:49 crc kubenswrapper[4873]: I0219 10:53:49.619983 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gq42m" podStartSLOduration=3.121397811 podStartE2EDuration="6.619965667s" podCreationTimestamp="2026-02-19 10:53:43 +0000 UTC" firstStartedPulling="2026-02-19 10:53:45.545148538 +0000 UTC m=+4134.834580166" lastFinishedPulling="2026-02-19 10:53:49.043716384 +0000 UTC m=+4138.333148022" observedRunningTime="2026-02-19 10:53:49.610680463 +0000 UTC m=+4138.900112111" watchObservedRunningTime="2026-02-19 10:53:49.619965667 +0000 UTC m=+4138.909397305" Feb 19 10:53:53 crc kubenswrapper[4873]: I0219 10:53:53.730057 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:53 crc kubenswrapper[4873]: I0219 10:53:53.730739 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:53 crc kubenswrapper[4873]: I0219 10:53:53.788321 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:54 crc kubenswrapper[4873]: I0219 10:53:54.698713 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:55 crc kubenswrapper[4873]: I0219 
10:53:55.980648 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:56 crc kubenswrapper[4873]: I0219 10:53:56.658891 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gq42m" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="registry-server" containerID="cri-o://a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e" gracePeriod=2 Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.204989 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.282471 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities\") pod \"adb060ed-98a8-4d81-820a-8e2d26500534\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.282618 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content\") pod \"adb060ed-98a8-4d81-820a-8e2d26500534\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.282644 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlv7g\" (UniqueName: \"kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g\") pod \"adb060ed-98a8-4d81-820a-8e2d26500534\" (UID: \"adb060ed-98a8-4d81-820a-8e2d26500534\") " Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.283481 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities" (OuterVolumeSpecName: 
"utilities") pod "adb060ed-98a8-4d81-820a-8e2d26500534" (UID: "adb060ed-98a8-4d81-820a-8e2d26500534"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.297954 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g" (OuterVolumeSpecName: "kube-api-access-vlv7g") pod "adb060ed-98a8-4d81-820a-8e2d26500534" (UID: "adb060ed-98a8-4d81-820a-8e2d26500534"). InnerVolumeSpecName "kube-api-access-vlv7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.348630 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adb060ed-98a8-4d81-820a-8e2d26500534" (UID: "adb060ed-98a8-4d81-820a-8e2d26500534"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.385508 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.385562 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlv7g\" (UniqueName: \"kubernetes.io/projected/adb060ed-98a8-4d81-820a-8e2d26500534-kube-api-access-vlv7g\") on node \"crc\" DevicePath \"\"" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.385578 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb060ed-98a8-4d81-820a-8e2d26500534-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.671494 4873 generic.go:334] "Generic (PLEG): container finished" podID="adb060ed-98a8-4d81-820a-8e2d26500534" containerID="a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e" exitCode=0 Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.671864 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerDied","Data":"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e"} Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.671898 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq42m" event={"ID":"adb060ed-98a8-4d81-820a-8e2d26500534","Type":"ContainerDied","Data":"00e9be978a5058cfb41fc6f67a24942030ba394962d3935402b0a652ff531cd7"} Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.671919 4873 scope.go:117] "RemoveContainer" containerID="a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 
10:53:57.672115 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq42m" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.700200 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.703870 4873 scope.go:117] "RemoveContainer" containerID="0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.710524 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gq42m"] Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.730387 4873 scope.go:117] "RemoveContainer" containerID="67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.794036 4873 scope.go:117] "RemoveContainer" containerID="a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e" Feb 19 10:53:57 crc kubenswrapper[4873]: E0219 10:53:57.795528 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e\": container with ID starting with a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e not found: ID does not exist" containerID="a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.795932 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e"} err="failed to get container status \"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e\": rpc error: code = NotFound desc = could not find container \"a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e\": container with ID starting with 
a212603423738414e050cdf1f902d1187dfb145aa2651df5b68ad1f242b3b09e not found: ID does not exist" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.796057 4873 scope.go:117] "RemoveContainer" containerID="0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728" Feb 19 10:53:57 crc kubenswrapper[4873]: E0219 10:53:57.799981 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728\": container with ID starting with 0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728 not found: ID does not exist" containerID="0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.800035 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728"} err="failed to get container status \"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728\": rpc error: code = NotFound desc = could not find container \"0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728\": container with ID starting with 0f41fd460e2c60d591b7e1631be9e225edf447e4059e806496eb7983462df728 not found: ID does not exist" Feb 19 10:53:57 crc kubenswrapper[4873]: I0219 10:53:57.800065 4873 scope.go:117] "RemoveContainer" containerID="67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc" Feb 19 10:53:57 crc kubenswrapper[4873]: E0219 10:53:57.800519 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc\": container with ID starting with 67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc not found: ID does not exist" containerID="67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc" Feb 19 10:53:57 crc 
kubenswrapper[4873]: I0219 10:53:57.800568 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc"} err="failed to get container status \"67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc\": rpc error: code = NotFound desc = could not find container \"67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc\": container with ID starting with 67c60116f380dfee9023370eececf0d5081e88cdb702860b7f01a8ec1aef69cc not found: ID does not exist" Feb 19 10:53:59 crc kubenswrapper[4873]: I0219 10:53:59.496342 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" path="/var/lib/kubelet/pods/adb060ed-98a8-4d81-820a-8e2d26500534/volumes" Feb 19 10:55:48 crc kubenswrapper[4873]: I0219 10:55:48.239963 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:55:48 crc kubenswrapper[4873]: I0219 10:55:48.240670 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:18 crc kubenswrapper[4873]: I0219 10:56:18.240339 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:56:18 crc kubenswrapper[4873]: I0219 10:56:18.240895 4873 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:48 crc kubenswrapper[4873]: I0219 10:56:48.240543 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:56:48 crc kubenswrapper[4873]: I0219 10:56:48.240994 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:48 crc kubenswrapper[4873]: I0219 10:56:48.241035 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 10:56:48 crc kubenswrapper[4873]: I0219 10:56:48.241837 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:56:48 crc kubenswrapper[4873]: I0219 10:56:48.241895 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" 
containerName="machine-config-daemon" containerID="cri-o://22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" gracePeriod=600 Feb 19 10:56:48 crc kubenswrapper[4873]: E0219 10:56:48.424835 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:56:49 crc kubenswrapper[4873]: I0219 10:56:49.380333 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" exitCode=0 Feb 19 10:56:49 crc kubenswrapper[4873]: I0219 10:56:49.380376 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de"} Feb 19 10:56:49 crc kubenswrapper[4873]: I0219 10:56:49.380408 4873 scope.go:117] "RemoveContainer" containerID="3f289291e3d6f52c31a0d326462313d44367b683964d6ef342d209f000362ec9" Feb 19 10:56:49 crc kubenswrapper[4873]: I0219 10:56:49.381134 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:56:49 crc kubenswrapper[4873]: E0219 10:56:49.381547 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:57:00 crc kubenswrapper[4873]: I0219 10:57:00.484646 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:57:00 crc kubenswrapper[4873]: E0219 10:57:00.485448 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:57:13 crc kubenswrapper[4873]: I0219 10:57:13.484764 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:57:13 crc kubenswrapper[4873]: E0219 10:57:13.486997 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:57:24 crc kubenswrapper[4873]: I0219 10:57:24.484030 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:57:24 crc kubenswrapper[4873]: E0219 10:57:24.484874 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:57:38 crc kubenswrapper[4873]: I0219 10:57:38.484437 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:57:38 crc kubenswrapper[4873]: E0219 10:57:38.485246 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:57:53 crc kubenswrapper[4873]: I0219 10:57:53.484765 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:57:53 crc kubenswrapper[4873]: E0219 10:57:53.485814 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:58:06 crc kubenswrapper[4873]: I0219 10:58:06.484640 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:58:06 crc kubenswrapper[4873]: E0219 10:58:06.485398 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:58:18 crc kubenswrapper[4873]: I0219 10:58:18.485138 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:58:18 crc kubenswrapper[4873]: E0219 10:58:18.486082 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:58:31 crc kubenswrapper[4873]: I0219 10:58:31.493713 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:58:31 crc kubenswrapper[4873]: E0219 10:58:31.494455 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:58:46 crc kubenswrapper[4873]: I0219 10:58:46.485236 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:58:46 crc kubenswrapper[4873]: E0219 10:58:46.486746 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:59:01 crc kubenswrapper[4873]: I0219 10:59:01.495931 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:59:01 crc kubenswrapper[4873]: E0219 10:59:01.496872 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:59:03 crc kubenswrapper[4873]: I0219 10:59:03.011951 4873 scope.go:117] "RemoveContainer" containerID="2e2552b2c4ecf36a5cf4a36c966c229ea32d862bd7b139e3133ee0c86e6b974d" Feb 19 10:59:03 crc kubenswrapper[4873]: I0219 10:59:03.057504 4873 scope.go:117] "RemoveContainer" containerID="c7b53d3544f9dd7da88afeb6fa02fe407f19ea68d37f09767e6132e62b14b454" Feb 19 10:59:03 crc kubenswrapper[4873]: I0219 10:59:03.135842 4873 scope.go:117] "RemoveContainer" containerID="27af10342a7e1ad2491360a91681d932dd63d524dbab6ca3add1793c15d831ca" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.751380 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:10 crc kubenswrapper[4873]: E0219 10:59:10.752300 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="extract-content" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.752312 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="extract-content" Feb 19 10:59:10 crc kubenswrapper[4873]: E0219 10:59:10.752324 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="registry-server" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.752330 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="registry-server" Feb 19 10:59:10 crc kubenswrapper[4873]: E0219 10:59:10.752340 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="extract-utilities" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.752347 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="extract-utilities" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.752524 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb060ed-98a8-4d81-820a-8e2d26500534" containerName="registry-server" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.755528 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.770195 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.770361 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkcz\" (UniqueName: \"kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.770507 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.785335 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.872431 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.872536 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6gkcz\" (UniqueName: \"kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.872592 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.873062 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.873237 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:10 crc kubenswrapper[4873]: I0219 10:59:10.894089 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkcz\" (UniqueName: \"kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz\") pod \"certified-operators-6vtrc\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:11 crc kubenswrapper[4873]: I0219 10:59:11.077645 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:11 crc kubenswrapper[4873]: I0219 10:59:11.623788 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:11 crc kubenswrapper[4873]: W0219 10:59:11.628248 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff237fa_c66c_4108_8f98_e737561ed9ff.slice/crio-eed49f58dc087c3b47898a9bc1ffb50a29782323277a5b9db0fc3ffae7be28e4 WatchSource:0}: Error finding container eed49f58dc087c3b47898a9bc1ffb50a29782323277a5b9db0fc3ffae7be28e4: Status 404 returned error can't find the container with id eed49f58dc087c3b47898a9bc1ffb50a29782323277a5b9db0fc3ffae7be28e4 Feb 19 10:59:11 crc kubenswrapper[4873]: I0219 10:59:11.881333 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerStarted","Data":"eed49f58dc087c3b47898a9bc1ffb50a29782323277a5b9db0fc3ffae7be28e4"} Feb 19 10:59:12 crc kubenswrapper[4873]: I0219 10:59:12.900367 4873 generic.go:334] "Generic (PLEG): container finished" podID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerID="7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630" exitCode=0 Feb 19 10:59:12 crc kubenswrapper[4873]: I0219 10:59:12.900922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerDied","Data":"7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630"} Feb 19 10:59:12 crc kubenswrapper[4873]: I0219 10:59:12.904575 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:59:13 crc kubenswrapper[4873]: I0219 10:59:13.910776 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerStarted","Data":"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338"} Feb 19 10:59:15 crc kubenswrapper[4873]: I0219 10:59:15.484808 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:59:15 crc kubenswrapper[4873]: E0219 10:59:15.485356 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:59:15 crc kubenswrapper[4873]: I0219 10:59:15.930548 4873 generic.go:334] "Generic (PLEG): container finished" podID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerID="984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338" exitCode=0 Feb 19 10:59:15 crc kubenswrapper[4873]: I0219 10:59:15.930595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerDied","Data":"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338"} Feb 19 10:59:16 crc kubenswrapper[4873]: I0219 10:59:16.945720 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerStarted","Data":"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5"} Feb 19 10:59:16 crc kubenswrapper[4873]: I0219 10:59:16.997282 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vtrc" 
podStartSLOduration=3.335551627 podStartE2EDuration="6.997261621s" podCreationTimestamp="2026-02-19 10:59:10 +0000 UTC" firstStartedPulling="2026-02-19 10:59:12.904242573 +0000 UTC m=+4462.193674211" lastFinishedPulling="2026-02-19 10:59:16.565952567 +0000 UTC m=+4465.855384205" observedRunningTime="2026-02-19 10:59:16.97370637 +0000 UTC m=+4466.263138008" watchObservedRunningTime="2026-02-19 10:59:16.997261621 +0000 UTC m=+4466.286693249" Feb 19 10:59:21 crc kubenswrapper[4873]: I0219 10:59:21.077960 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:21 crc kubenswrapper[4873]: I0219 10:59:21.078609 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:21 crc kubenswrapper[4873]: I0219 10:59:21.163027 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:22 crc kubenswrapper[4873]: I0219 10:59:22.056213 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:22 crc kubenswrapper[4873]: I0219 10:59:22.104789 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.024790 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vtrc" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="registry-server" containerID="cri-o://24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5" gracePeriod=2 Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.540636 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.567774 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content\") pod \"dff237fa-c66c-4108-8f98-e737561ed9ff\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.636620 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dff237fa-c66c-4108-8f98-e737561ed9ff" (UID: "dff237fa-c66c-4108-8f98-e737561ed9ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.669443 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities\") pod \"dff237fa-c66c-4108-8f98-e737561ed9ff\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.671064 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities" (OuterVolumeSpecName: "utilities") pod "dff237fa-c66c-4108-8f98-e737561ed9ff" (UID: "dff237fa-c66c-4108-8f98-e737561ed9ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.672000 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gkcz\" (UniqueName: \"kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz\") pod \"dff237fa-c66c-4108-8f98-e737561ed9ff\" (UID: \"dff237fa-c66c-4108-8f98-e737561ed9ff\") " Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.672917 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.673141 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff237fa-c66c-4108-8f98-e737561ed9ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.678342 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz" (OuterVolumeSpecName: "kube-api-access-6gkcz") pod "dff237fa-c66c-4108-8f98-e737561ed9ff" (UID: "dff237fa-c66c-4108-8f98-e737561ed9ff"). InnerVolumeSpecName "kube-api-access-6gkcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:59:24 crc kubenswrapper[4873]: I0219 10:59:24.775051 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gkcz\" (UniqueName: \"kubernetes.io/projected/dff237fa-c66c-4108-8f98-e737561ed9ff-kube-api-access-6gkcz\") on node \"crc\" DevicePath \"\"" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.036786 4873 generic.go:334] "Generic (PLEG): container finished" podID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerID="24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5" exitCode=0 Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.036844 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerDied","Data":"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5"} Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.038376 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vtrc" event={"ID":"dff237fa-c66c-4108-8f98-e737561ed9ff","Type":"ContainerDied","Data":"eed49f58dc087c3b47898a9bc1ffb50a29782323277a5b9db0fc3ffae7be28e4"} Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.038423 4873 scope.go:117] "RemoveContainer" containerID="24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.036873 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vtrc" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.065804 4873 scope.go:117] "RemoveContainer" containerID="984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.090615 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.105874 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vtrc"] Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.115818 4873 scope.go:117] "RemoveContainer" containerID="7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.158557 4873 scope.go:117] "RemoveContainer" containerID="24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5" Feb 19 10:59:25 crc kubenswrapper[4873]: E0219 10:59:25.159035 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5\": container with ID starting with 24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5 not found: ID does not exist" containerID="24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.159075 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5"} err="failed to get container status \"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5\": rpc error: code = NotFound desc = could not find container \"24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5\": container with ID starting with 24f7cc13e93140d4caec71be495e460686bd5d48b280fafe5a97dea3ac447fa5 not 
found: ID does not exist" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.159121 4873 scope.go:117] "RemoveContainer" containerID="984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338" Feb 19 10:59:25 crc kubenswrapper[4873]: E0219 10:59:25.159525 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338\": container with ID starting with 984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338 not found: ID does not exist" containerID="984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.159546 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338"} err="failed to get container status \"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338\": rpc error: code = NotFound desc = could not find container \"984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338\": container with ID starting with 984fe396a2f38d658124a2949cbf8d81c266e54ed3cdf8e85e387c0940b3d338 not found: ID does not exist" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.159558 4873 scope.go:117] "RemoveContainer" containerID="7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630" Feb 19 10:59:25 crc kubenswrapper[4873]: E0219 10:59:25.159927 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630\": container with ID starting with 7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630 not found: ID does not exist" containerID="7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.159952 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630"} err="failed to get container status \"7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630\": rpc error: code = NotFound desc = could not find container \"7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630\": container with ID starting with 7a3f863c5402ebc57d8a4349326c3916d6c7005e837fa8f8e791d46ad913d630 not found: ID does not exist" Feb 19 10:59:25 crc kubenswrapper[4873]: I0219 10:59:25.496851 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" path="/var/lib/kubelet/pods/dff237fa-c66c-4108-8f98-e737561ed9ff/volumes" Feb 19 10:59:28 crc kubenswrapper[4873]: I0219 10:59:28.485491 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:59:28 crc kubenswrapper[4873]: E0219 10:59:28.488419 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:59:42 crc kubenswrapper[4873]: I0219 10:59:42.484563 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:59:42 crc kubenswrapper[4873]: E0219 10:59:42.485290 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 10:59:53 crc kubenswrapper[4873]: I0219 10:59:53.483920 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 10:59:53 crc kubenswrapper[4873]: E0219 10:59:53.484720 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.183787 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv"] Feb 19 11:00:00 crc kubenswrapper[4873]: E0219 11:00:00.184807 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.184821 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4873]: E0219 11:00:00.184850 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.184857 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4873]: E0219 11:00:00.184869 4873 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.184875 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.185069 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff237fa-c66c-4108-8f98-e737561ed9ff" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.185775 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.188297 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.188483 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.194271 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv"] Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.274485 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.274965 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqv7\" (UniqueName: 
\"kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.275134 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.376931 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmqv7\" (UniqueName: \"kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.377037 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.377092 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc 
kubenswrapper[4873]: I0219 11:00:00.378027 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.384056 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.392914 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmqv7\" (UniqueName: \"kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7\") pod \"collect-profiles-29524980-zb8sv\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.507985 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:00 crc kubenswrapper[4873]: I0219 11:00:00.964077 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv"] Feb 19 11:00:01 crc kubenswrapper[4873]: I0219 11:00:01.398532 4873 generic.go:334] "Generic (PLEG): container finished" podID="e3c4ecc6-1490-4170-9dd3-122c4417e62b" containerID="8127d29f0285689637090a99935e6d60c3fd803febae61bd0dc63c50815f28db" exitCode=0 Feb 19 11:00:01 crc kubenswrapper[4873]: I0219 11:00:01.398589 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" event={"ID":"e3c4ecc6-1490-4170-9dd3-122c4417e62b","Type":"ContainerDied","Data":"8127d29f0285689637090a99935e6d60c3fd803febae61bd0dc63c50815f28db"} Feb 19 11:00:01 crc kubenswrapper[4873]: I0219 11:00:01.398808 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" event={"ID":"e3c4ecc6-1490-4170-9dd3-122c4417e62b","Type":"ContainerStarted","Data":"f2c237e4fd2699824c9cd48a626f719ce97725405815fb96cb5e98e2747d0a3d"} Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.790272 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.929931 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume\") pod \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.930114 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmqv7\" (UniqueName: \"kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7\") pod \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.930188 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume\") pod \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\" (UID: \"e3c4ecc6-1490-4170-9dd3-122c4417e62b\") " Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.930784 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3c4ecc6-1490-4170-9dd3-122c4417e62b" (UID: "e3c4ecc6-1490-4170-9dd3-122c4417e62b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.936642 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3c4ecc6-1490-4170-9dd3-122c4417e62b" (UID: "e3c4ecc6-1490-4170-9dd3-122c4417e62b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:00:02 crc kubenswrapper[4873]: I0219 11:00:02.937684 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7" (OuterVolumeSpecName: "kube-api-access-wmqv7") pod "e3c4ecc6-1490-4170-9dd3-122c4417e62b" (UID: "e3c4ecc6-1490-4170-9dd3-122c4417e62b"). InnerVolumeSpecName "kube-api-access-wmqv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.032769 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3c4ecc6-1490-4170-9dd3-122c4417e62b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.032807 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmqv7\" (UniqueName: \"kubernetes.io/projected/e3c4ecc6-1490-4170-9dd3-122c4417e62b-kube-api-access-wmqv7\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.032818 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3c4ecc6-1490-4170-9dd3-122c4417e62b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.437291 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" event={"ID":"e3c4ecc6-1490-4170-9dd3-122c4417e62b","Type":"ContainerDied","Data":"f2c237e4fd2699824c9cd48a626f719ce97725405815fb96cb5e98e2747d0a3d"} Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.437338 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c237e4fd2699824c9cd48a626f719ce97725405815fb96cb5e98e2747d0a3d" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.437692 4873 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-zb8sv" Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.889032 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"] Feb 19 11:00:03 crc kubenswrapper[4873]: I0219 11:00:03.901882 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-sjf58"] Feb 19 11:00:04 crc kubenswrapper[4873]: I0219 11:00:04.484063 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:00:04 crc kubenswrapper[4873]: E0219 11:00:04.484401 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:00:05 crc kubenswrapper[4873]: I0219 11:00:05.500005 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4ec2bd-4c16-4682-873a-4fbdcc5d9580" path="/var/lib/kubelet/pods/fb4ec2bd-4c16-4682-873a-4fbdcc5d9580/volumes" Feb 19 11:00:09 crc kubenswrapper[4873]: E0219 11:00:09.122716 4873 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.156:35932->38.102.83.156:45689: write tcp 38.102.83.156:35932->38.102.83.156:45689: write: connection reset by peer Feb 19 11:00:19 crc kubenswrapper[4873]: I0219 11:00:19.484243 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:00:19 crc kubenswrapper[4873]: E0219 11:00:19.485012 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:00:33 crc kubenswrapper[4873]: I0219 11:00:33.484645 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:00:33 crc kubenswrapper[4873]: E0219 11:00:33.485596 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:00:47 crc kubenswrapper[4873]: I0219 11:00:47.484277 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:00:47 crc kubenswrapper[4873]: E0219 11:00:47.485055 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:00:59 crc kubenswrapper[4873]: I0219 11:00:59.485195 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:00:59 crc kubenswrapper[4873]: E0219 11:00:59.486227 4873 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.156856 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524981-pxsmx"] Feb 19 11:01:00 crc kubenswrapper[4873]: E0219 11:01:00.157787 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c4ecc6-1490-4170-9dd3-122c4417e62b" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.157884 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c4ecc6-1490-4170-9dd3-122c4417e62b" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.158183 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c4ecc6-1490-4170-9dd3-122c4417e62b" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.158950 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.176977 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-pxsmx"] Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.260797 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.260893 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.261039 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42s7q\" (UniqueName: \"kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.261141 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.362786 4873 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-42s7q\" (UniqueName: \"kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.362928 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.363052 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.363142 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.384977 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.391876 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42s7q\" (UniqueName: 
\"kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.392338 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.392544 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle\") pod \"keystone-cron-29524981-pxsmx\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.484165 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:00 crc kubenswrapper[4873]: I0219 11:01:00.977332 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-pxsmx"] Feb 19 11:01:01 crc kubenswrapper[4873]: I0219 11:01:01.986795 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-pxsmx" event={"ID":"3f08f0c4-870d-4d9a-8a82-ce22827ce779","Type":"ContainerStarted","Data":"8690a5794fdd471bca42526814dea6677eb6e29b84003855bd13530e981b2110"} Feb 19 11:01:01 crc kubenswrapper[4873]: I0219 11:01:01.987211 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-pxsmx" event={"ID":"3f08f0c4-870d-4d9a-8a82-ce22827ce779","Type":"ContainerStarted","Data":"4033576ca7f24748022d1be039a2a747e580f1f4e4d8f70a15ebec2763443c71"} Feb 19 11:01:02 crc kubenswrapper[4873]: I0219 11:01:02.004629 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524981-pxsmx" podStartSLOduration=2.004603053 podStartE2EDuration="2.004603053s" podCreationTimestamp="2026-02-19 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:01:02.00368723 +0000 UTC m=+4571.293118868" watchObservedRunningTime="2026-02-19 11:01:02.004603053 +0000 UTC m=+4571.294034691" Feb 19 11:01:03 crc kubenswrapper[4873]: I0219 11:01:03.218543 4873 scope.go:117] "RemoveContainer" containerID="51b872a4026735697f0f9cc00b395427fbc06efd93a529d94f5319e0d220778e" Feb 19 11:01:06 crc kubenswrapper[4873]: I0219 11:01:06.031938 4873 generic.go:334] "Generic (PLEG): container finished" podID="3f08f0c4-870d-4d9a-8a82-ce22827ce779" containerID="8690a5794fdd471bca42526814dea6677eb6e29b84003855bd13530e981b2110" exitCode=0 Feb 19 11:01:06 crc kubenswrapper[4873]: I0219 11:01:06.032236 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29524981-pxsmx" event={"ID":"3f08f0c4-870d-4d9a-8a82-ce22827ce779","Type":"ContainerDied","Data":"8690a5794fdd471bca42526814dea6677eb6e29b84003855bd13530e981b2110"} Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.457773 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.516396 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle\") pod \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.516669 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys\") pod \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.516692 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data\") pod \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.516799 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42s7q\" (UniqueName: \"kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q\") pod \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\" (UID: \"3f08f0c4-870d-4d9a-8a82-ce22827ce779\") " Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.522544 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3f08f0c4-870d-4d9a-8a82-ce22827ce779" (UID: "3f08f0c4-870d-4d9a-8a82-ce22827ce779"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.522551 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q" (OuterVolumeSpecName: "kube-api-access-42s7q") pod "3f08f0c4-870d-4d9a-8a82-ce22827ce779" (UID: "3f08f0c4-870d-4d9a-8a82-ce22827ce779"). InnerVolumeSpecName "kube-api-access-42s7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.546825 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f08f0c4-870d-4d9a-8a82-ce22827ce779" (UID: "3f08f0c4-870d-4d9a-8a82-ce22827ce779"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.575247 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data" (OuterVolumeSpecName: "config-data") pod "3f08f0c4-870d-4d9a-8a82-ce22827ce779" (UID: "3f08f0c4-870d-4d9a-8a82-ce22827ce779"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.619450 4873 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.619479 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.619492 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42s7q\" (UniqueName: \"kubernetes.io/projected/3f08f0c4-870d-4d9a-8a82-ce22827ce779-kube-api-access-42s7q\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:07 crc kubenswrapper[4873]: I0219 11:01:07.619502 4873 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f08f0c4-870d-4d9a-8a82-ce22827ce779-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:08 crc kubenswrapper[4873]: I0219 11:01:08.053909 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-pxsmx" event={"ID":"3f08f0c4-870d-4d9a-8a82-ce22827ce779","Type":"ContainerDied","Data":"4033576ca7f24748022d1be039a2a747e580f1f4e4d8f70a15ebec2763443c71"} Feb 19 11:01:08 crc kubenswrapper[4873]: I0219 11:01:08.054587 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4033576ca7f24748022d1be039a2a747e580f1f4e4d8f70a15ebec2763443c71" Feb 19 11:01:08 crc kubenswrapper[4873]: I0219 11:01:08.053973 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-pxsmx" Feb 19 11:01:13 crc kubenswrapper[4873]: I0219 11:01:13.484130 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:01:13 crc kubenswrapper[4873]: E0219 11:01:13.484737 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:01:27 crc kubenswrapper[4873]: I0219 11:01:27.484725 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:01:27 crc kubenswrapper[4873]: E0219 11:01:27.485907 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:01:38 crc kubenswrapper[4873]: I0219 11:01:38.485166 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:01:38 crc kubenswrapper[4873]: E0219 11:01:38.485911 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:01:53 crc kubenswrapper[4873]: I0219 11:01:53.485237 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:01:54 crc kubenswrapper[4873]: I0219 11:01:54.486086 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739"} Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.281296 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:09 crc kubenswrapper[4873]: E0219 11:03:09.282146 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f08f0c4-870d-4d9a-8a82-ce22827ce779" containerName="keystone-cron" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.282158 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f08f0c4-870d-4d9a-8a82-ce22827ce779" containerName="keystone-cron" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.282340 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f08f0c4-870d-4d9a-8a82-ce22827ce779" containerName="keystone-cron" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.283747 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.290997 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.395632 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.395683 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.395713 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4td\" (UniqueName: \"kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.498298 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.498340 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.498372 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4td\" (UniqueName: \"kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.498855 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.499023 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.529311 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4td\" (UniqueName: \"kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td\") pod \"redhat-operators-plncc\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:09 crc kubenswrapper[4873]: I0219 11:03:09.623664 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:10 crc kubenswrapper[4873]: I0219 11:03:10.203501 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:11 crc kubenswrapper[4873]: I0219 11:03:11.213254 4873 generic.go:334] "Generic (PLEG): container finished" podID="6e290883-f526-41db-a353-55a50f744490" containerID="6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe" exitCode=0 Feb 19 11:03:11 crc kubenswrapper[4873]: I0219 11:03:11.213315 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerDied","Data":"6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe"} Feb 19 11:03:11 crc kubenswrapper[4873]: I0219 11:03:11.213581 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerStarted","Data":"5e8aa772e7b0649a59737f94f1f93bbab70267c1d35f86ec43c903ae04595e28"} Feb 19 11:03:13 crc kubenswrapper[4873]: I0219 11:03:13.232256 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerStarted","Data":"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd"} Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.236928 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.239628 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.258087 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.290880 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.291311 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.291501 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlpd\" (UniqueName: \"kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.393150 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.393241 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrlpd\" (UniqueName: \"kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.393353 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.393818 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.393831 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.420370 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlpd\" (UniqueName: \"kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd\") pod \"redhat-marketplace-svn5x\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:17 crc kubenswrapper[4873]: I0219 11:03:17.557077 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:18 crc kubenswrapper[4873]: I0219 11:03:18.142471 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:18 crc kubenswrapper[4873]: I0219 11:03:18.312552 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerStarted","Data":"6196d4d73d0291d966aa2df84cf6131187ad22159d39b95f76a24db6129df9e9"} Feb 19 11:03:18 crc kubenswrapper[4873]: I0219 11:03:18.315518 4873 generic.go:334] "Generic (PLEG): container finished" podID="6e290883-f526-41db-a353-55a50f744490" containerID="72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd" exitCode=0 Feb 19 11:03:18 crc kubenswrapper[4873]: I0219 11:03:18.315547 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerDied","Data":"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd"} Feb 19 11:03:19 crc kubenswrapper[4873]: I0219 11:03:19.327538 4873 generic.go:334] "Generic (PLEG): container finished" podID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerID="20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58" exitCode=0 Feb 19 11:03:19 crc kubenswrapper[4873]: I0219 11:03:19.327848 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerDied","Data":"20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58"} Feb 19 11:03:20 crc kubenswrapper[4873]: I0219 11:03:20.340675 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" 
event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerStarted","Data":"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3"} Feb 19 11:03:20 crc kubenswrapper[4873]: I0219 11:03:20.362580 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-plncc" podStartSLOduration=3.243244492 podStartE2EDuration="11.36255459s" podCreationTimestamp="2026-02-19 11:03:09 +0000 UTC" firstStartedPulling="2026-02-19 11:03:11.216199428 +0000 UTC m=+4700.505631066" lastFinishedPulling="2026-02-19 11:03:19.335509526 +0000 UTC m=+4708.624941164" observedRunningTime="2026-02-19 11:03:20.356722306 +0000 UTC m=+4709.646153954" watchObservedRunningTime="2026-02-19 11:03:20.36255459 +0000 UTC m=+4709.651986248" Feb 19 11:03:21 crc kubenswrapper[4873]: I0219 11:03:21.372504 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerStarted","Data":"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4"} Feb 19 11:03:22 crc kubenswrapper[4873]: I0219 11:03:22.385245 4873 generic.go:334] "Generic (PLEG): container finished" podID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerID="dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4" exitCode=0 Feb 19 11:03:22 crc kubenswrapper[4873]: I0219 11:03:22.385325 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerDied","Data":"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4"} Feb 19 11:03:25 crc kubenswrapper[4873]: I0219 11:03:25.416837 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" 
event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerStarted","Data":"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511"} Feb 19 11:03:25 crc kubenswrapper[4873]: I0219 11:03:25.440289 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svn5x" podStartSLOduration=3.891249643 podStartE2EDuration="8.440273866s" podCreationTimestamp="2026-02-19 11:03:17 +0000 UTC" firstStartedPulling="2026-02-19 11:03:19.334640943 +0000 UTC m=+4708.624072581" lastFinishedPulling="2026-02-19 11:03:23.883665166 +0000 UTC m=+4713.173096804" observedRunningTime="2026-02-19 11:03:25.4396756 +0000 UTC m=+4714.729107238" watchObservedRunningTime="2026-02-19 11:03:25.440273866 +0000 UTC m=+4714.729705504" Feb 19 11:03:27 crc kubenswrapper[4873]: I0219 11:03:27.558193 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:27 crc kubenswrapper[4873]: I0219 11:03:27.558532 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:27 crc kubenswrapper[4873]: I0219 11:03:27.608380 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:29 crc kubenswrapper[4873]: I0219 11:03:29.625671 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:29 crc kubenswrapper[4873]: I0219 11:03:29.626004 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:30 crc kubenswrapper[4873]: I0219 11:03:30.673542 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-plncc" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="registry-server" probeResult="failure" 
output=< Feb 19 11:03:30 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:03:30 crc kubenswrapper[4873]: > Feb 19 11:03:37 crc kubenswrapper[4873]: I0219 11:03:37.610998 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:37 crc kubenswrapper[4873]: I0219 11:03:37.664332 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:38 crc kubenswrapper[4873]: I0219 11:03:38.531809 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svn5x" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="registry-server" containerID="cri-o://3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511" gracePeriod=2 Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.041786 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.177974 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlpd\" (UniqueName: \"kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd\") pod \"e236a7a3-ab66-4f76-aba7-ffed81663143\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.178152 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities\") pod \"e236a7a3-ab66-4f76-aba7-ffed81663143\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.178185 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content\") pod \"e236a7a3-ab66-4f76-aba7-ffed81663143\" (UID: \"e236a7a3-ab66-4f76-aba7-ffed81663143\") " Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.179010 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities" (OuterVolumeSpecName: "utilities") pod "e236a7a3-ab66-4f76-aba7-ffed81663143" (UID: "e236a7a3-ab66-4f76-aba7-ffed81663143"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.208302 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd" (OuterVolumeSpecName: "kube-api-access-xrlpd") pod "e236a7a3-ab66-4f76-aba7-ffed81663143" (UID: "e236a7a3-ab66-4f76-aba7-ffed81663143"). InnerVolumeSpecName "kube-api-access-xrlpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.246352 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e236a7a3-ab66-4f76-aba7-ffed81663143" (UID: "e236a7a3-ab66-4f76-aba7-ffed81663143"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.281755 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlpd\" (UniqueName: \"kubernetes.io/projected/e236a7a3-ab66-4f76-aba7-ffed81663143-kube-api-access-xrlpd\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.281815 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.281828 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e236a7a3-ab66-4f76-aba7-ffed81663143-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.541693 4873 generic.go:334] "Generic (PLEG): container finished" podID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerID="3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511" exitCode=0 Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.541750 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerDied","Data":"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511"} Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.541784 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svn5x" event={"ID":"e236a7a3-ab66-4f76-aba7-ffed81663143","Type":"ContainerDied","Data":"6196d4d73d0291d966aa2df84cf6131187ad22159d39b95f76a24db6129df9e9"} Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.541800 4873 scope.go:117] "RemoveContainer" containerID="3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 
11:03:39.541918 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svn5x" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.565216 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.565632 4873 scope.go:117] "RemoveContainer" containerID="dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.578560 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svn5x"] Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.589716 4873 scope.go:117] "RemoveContainer" containerID="20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.637744 4873 scope.go:117] "RemoveContainer" containerID="3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511" Feb 19 11:03:39 crc kubenswrapper[4873]: E0219 11:03:39.638192 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511\": container with ID starting with 3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511 not found: ID does not exist" containerID="3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.638231 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511"} err="failed to get container status \"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511\": rpc error: code = NotFound desc = could not find container \"3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511\": container with ID starting with 
3f172914177b2a80a2695f8e24b026849a54c503b1183af840362134ca514511 not found: ID does not exist" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.638264 4873 scope.go:117] "RemoveContainer" containerID="dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4" Feb 19 11:03:39 crc kubenswrapper[4873]: E0219 11:03:39.638565 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4\": container with ID starting with dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4 not found: ID does not exist" containerID="dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.638589 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4"} err="failed to get container status \"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4\": rpc error: code = NotFound desc = could not find container \"dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4\": container with ID starting with dd393fb68f4b4f99e55a564abf28d6d2280686070d70f6821f08f15dc6520ca4 not found: ID does not exist" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.638604 4873 scope.go:117] "RemoveContainer" containerID="20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58" Feb 19 11:03:39 crc kubenswrapper[4873]: E0219 11:03:39.638836 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58\": container with ID starting with 20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58 not found: ID does not exist" containerID="20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58" Feb 19 11:03:39 crc 
kubenswrapper[4873]: I0219 11:03:39.638860 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58"} err="failed to get container status \"20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58\": rpc error: code = NotFound desc = could not find container \"20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58\": container with ID starting with 20f09bfbbc23c5c4835fd8a5c5af61ad45a563038f321621749eef3a9dd96f58 not found: ID does not exist" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.674411 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:39 crc kubenswrapper[4873]: I0219 11:03:39.722916 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:40 crc kubenswrapper[4873]: I0219 11:03:40.847036 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:41 crc kubenswrapper[4873]: I0219 11:03:41.496503 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" path="/var/lib/kubelet/pods/e236a7a3-ab66-4f76-aba7-ffed81663143/volumes" Feb 19 11:03:41 crc kubenswrapper[4873]: I0219 11:03:41.566226 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-plncc" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="registry-server" containerID="cri-o://36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3" gracePeriod=2 Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.074066 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.146869 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities\") pod \"6e290883-f526-41db-a353-55a50f744490\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.147124 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content\") pod \"6e290883-f526-41db-a353-55a50f744490\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.147226 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl4td\" (UniqueName: \"kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td\") pod \"6e290883-f526-41db-a353-55a50f744490\" (UID: \"6e290883-f526-41db-a353-55a50f744490\") " Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.148460 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities" (OuterVolumeSpecName: "utilities") pod "6e290883-f526-41db-a353-55a50f744490" (UID: "6e290883-f526-41db-a353-55a50f744490"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.250661 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.272265 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e290883-f526-41db-a353-55a50f744490" (UID: "6e290883-f526-41db-a353-55a50f744490"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.352872 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e290883-f526-41db-a353-55a50f744490-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.577504 4873 generic.go:334] "Generic (PLEG): container finished" podID="6e290883-f526-41db-a353-55a50f744490" containerID="36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3" exitCode=0 Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.577556 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerDied","Data":"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3"} Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.577587 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plncc" event={"ID":"6e290883-f526-41db-a353-55a50f744490","Type":"ContainerDied","Data":"5e8aa772e7b0649a59737f94f1f93bbab70267c1d35f86ec43c903ae04595e28"} Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.577609 4873 
scope.go:117] "RemoveContainer" containerID="36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.577750 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plncc" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.602915 4873 scope.go:117] "RemoveContainer" containerID="72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.785268 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td" (OuterVolumeSpecName: "kube-api-access-gl4td") pod "6e290883-f526-41db-a353-55a50f744490" (UID: "6e290883-f526-41db-a353-55a50f744490"). InnerVolumeSpecName "kube-api-access-gl4td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.821042 4873 scope.go:117] "RemoveContainer" containerID="6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.865298 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl4td\" (UniqueName: \"kubernetes.io/projected/6e290883-f526-41db-a353-55a50f744490-kube-api-access-gl4td\") on node \"crc\" DevicePath \"\"" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.920278 4873 scope.go:117] "RemoveContainer" containerID="36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3" Feb 19 11:03:42 crc kubenswrapper[4873]: E0219 11:03:42.920905 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3\": container with ID starting with 36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3 not found: ID does not exist" 
containerID="36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.921077 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3"} err="failed to get container status \"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3\": rpc error: code = NotFound desc = could not find container \"36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3\": container with ID starting with 36dd93803a9bba619ed90e1911c35791dfca87f72b721514b9e79a923453a2b3 not found: ID does not exist" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.921276 4873 scope.go:117] "RemoveContainer" containerID="72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd" Feb 19 11:03:42 crc kubenswrapper[4873]: E0219 11:03:42.921701 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd\": container with ID starting with 72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd not found: ID does not exist" containerID="72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.921724 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd"} err="failed to get container status \"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd\": rpc error: code = NotFound desc = could not find container \"72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd\": container with ID starting with 72c590d804613f1c773acf673eb71ec0069c6e7e290132e13d2a88008e3611cd not found: ID does not exist" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.921737 4873 scope.go:117] 
"RemoveContainer" containerID="6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe" Feb 19 11:03:42 crc kubenswrapper[4873]: E0219 11:03:42.921916 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe\": container with ID starting with 6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe not found: ID does not exist" containerID="6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.921937 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe"} err="failed to get container status \"6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe\": rpc error: code = NotFound desc = could not find container \"6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe\": container with ID starting with 6de9ee853a1df07b7a365ba1104b2be88eabbf7c1da72e76e0f20f1e8736adbe not found: ID does not exist" Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.983119 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:42 crc kubenswrapper[4873]: I0219 11:03:42.993357 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-plncc"] Feb 19 11:03:43 crc kubenswrapper[4873]: I0219 11:03:43.496819 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e290883-f526-41db-a353-55a50f744490" path="/var/lib/kubelet/pods/6e290883-f526-41db-a353-55a50f744490/volumes" Feb 19 11:04:18 crc kubenswrapper[4873]: I0219 11:04:18.240699 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:04:18 crc kubenswrapper[4873]: I0219 11:04:18.241371 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:04:48 crc kubenswrapper[4873]: I0219 11:04:48.240718 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:04:48 crc kubenswrapper[4873]: I0219 11:04:48.241473 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.737743 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739177 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739204 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739235 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" 
containerName="extract-content" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739243 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="extract-content" Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739261 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="extract-content" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739269 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="extract-content" Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739300 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="extract-utilities" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739309 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="extract-utilities" Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739322 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="extract-utilities" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739330 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="extract-utilities" Feb 19 11:05:03 crc kubenswrapper[4873]: E0219 11:05:03.739344 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739351 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739567 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e236a7a3-ab66-4f76-aba7-ffed81663143" 
containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.739590 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e290883-f526-41db-a353-55a50f744490" containerName="registry-server" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.741029 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.744305 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.744426 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg72n\" (UniqueName: \"kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.744588 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.750042 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.846138 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.846226 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg72n\" (UniqueName: \"kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.846247 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.846722 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.846764 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:03 crc kubenswrapper[4873]: I0219 11:05:03.866437 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg72n\" (UniqueName: 
\"kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n\") pod \"community-operators-mk7zr\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:04 crc kubenswrapper[4873]: I0219 11:05:04.089221 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:05 crc kubenswrapper[4873]: I0219 11:05:05.038675 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:05 crc kubenswrapper[4873]: I0219 11:05:05.354856 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerStarted","Data":"49e088349e642ccc00b12a3c3c974f30b5fd5aa45f7ce3ce9abd0f16f1f209ae"} Feb 19 11:05:06 crc kubenswrapper[4873]: I0219 11:05:06.366155 4873 generic.go:334] "Generic (PLEG): container finished" podID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerID="31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687" exitCode=0 Feb 19 11:05:06 crc kubenswrapper[4873]: I0219 11:05:06.366198 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerDied","Data":"31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687"} Feb 19 11:05:06 crc kubenswrapper[4873]: I0219 11:05:06.369291 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:05:08 crc kubenswrapper[4873]: I0219 11:05:08.403240 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerStarted","Data":"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca"} Feb 19 11:05:09 
crc kubenswrapper[4873]: I0219 11:05:09.413473 4873 generic.go:334] "Generic (PLEG): container finished" podID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerID="202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca" exitCode=0 Feb 19 11:05:09 crc kubenswrapper[4873]: I0219 11:05:09.413524 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerDied","Data":"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca"} Feb 19 11:05:10 crc kubenswrapper[4873]: I0219 11:05:10.424079 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerStarted","Data":"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a"} Feb 19 11:05:10 crc kubenswrapper[4873]: I0219 11:05:10.445649 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk7zr" podStartSLOduration=3.747127523 podStartE2EDuration="7.445627854s" podCreationTimestamp="2026-02-19 11:05:03 +0000 UTC" firstStartedPulling="2026-02-19 11:05:06.368987145 +0000 UTC m=+4815.658418803" lastFinishedPulling="2026-02-19 11:05:10.067487496 +0000 UTC m=+4819.356919134" observedRunningTime="2026-02-19 11:05:10.439966015 +0000 UTC m=+4819.729397653" watchObservedRunningTime="2026-02-19 11:05:10.445627854 +0000 UTC m=+4819.735059492" Feb 19 11:05:14 crc kubenswrapper[4873]: I0219 11:05:14.089594 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:14 crc kubenswrapper[4873]: I0219 11:05:14.090074 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:14 crc kubenswrapper[4873]: I0219 11:05:14.143843 4873 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:18 crc kubenswrapper[4873]: I0219 11:05:18.240621 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:05:18 crc kubenswrapper[4873]: I0219 11:05:18.241374 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:05:18 crc kubenswrapper[4873]: I0219 11:05:18.242122 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:05:18 crc kubenswrapper[4873]: I0219 11:05:18.243180 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:05:18 crc kubenswrapper[4873]: I0219 11:05:18.243271 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739" gracePeriod=600 Feb 19 11:05:19 crc kubenswrapper[4873]: I0219 11:05:19.513952 4873 generic.go:334] "Generic (PLEG): container finished" 
podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739" exitCode=0 Feb 19 11:05:19 crc kubenswrapper[4873]: I0219 11:05:19.514055 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739"} Feb 19 11:05:19 crc kubenswrapper[4873]: I0219 11:05:19.514705 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"} Feb 19 11:05:19 crc kubenswrapper[4873]: I0219 11:05:19.514737 4873 scope.go:117] "RemoveContainer" containerID="22be4fcac63d93394195fc713b152ebad1813a068085fb725288b1a582e6c2de" Feb 19 11:05:24 crc kubenswrapper[4873]: I0219 11:05:24.141660 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:24 crc kubenswrapper[4873]: I0219 11:05:24.228814 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:24 crc kubenswrapper[4873]: I0219 11:05:24.560248 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk7zr" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="registry-server" containerID="cri-o://41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a" gracePeriod=2 Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.034917 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.202894 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content\") pod \"0f47487c-2e96-41ed-963c-de6c2f8bf152\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.202988 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg72n\" (UniqueName: \"kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n\") pod \"0f47487c-2e96-41ed-963c-de6c2f8bf152\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.203240 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities\") pod \"0f47487c-2e96-41ed-963c-de6c2f8bf152\" (UID: \"0f47487c-2e96-41ed-963c-de6c2f8bf152\") " Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.204537 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities" (OuterVolumeSpecName: "utilities") pod "0f47487c-2e96-41ed-963c-de6c2f8bf152" (UID: "0f47487c-2e96-41ed-963c-de6c2f8bf152"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.215319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n" (OuterVolumeSpecName: "kube-api-access-hg72n") pod "0f47487c-2e96-41ed-963c-de6c2f8bf152" (UID: "0f47487c-2e96-41ed-963c-de6c2f8bf152"). InnerVolumeSpecName "kube-api-access-hg72n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.265854 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f47487c-2e96-41ed-963c-de6c2f8bf152" (UID: "0f47487c-2e96-41ed-963c-de6c2f8bf152"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.306271 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg72n\" (UniqueName: \"kubernetes.io/projected/0f47487c-2e96-41ed-963c-de6c2f8bf152-kube-api-access-hg72n\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.306324 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.306344 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f47487c-2e96-41ed-963c-de6c2f8bf152-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.572551 4873 generic.go:334] "Generic (PLEG): container finished" podID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerID="41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a" exitCode=0 Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.572591 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerDied","Data":"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a"} Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.572621 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mk7zr" event={"ID":"0f47487c-2e96-41ed-963c-de6c2f8bf152","Type":"ContainerDied","Data":"49e088349e642ccc00b12a3c3c974f30b5fd5aa45f7ce3ce9abd0f16f1f209ae"} Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.572641 4873 scope.go:117] "RemoveContainer" containerID="41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.572845 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk7zr" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.603852 4873 scope.go:117] "RemoveContainer" containerID="202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.613462 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.628418 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk7zr"] Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.639681 4873 scope.go:117] "RemoveContainer" containerID="31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.686676 4873 scope.go:117] "RemoveContainer" containerID="41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a" Feb 19 11:05:25 crc kubenswrapper[4873]: E0219 11:05:25.687362 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a\": container with ID starting with 41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a not found: ID does not exist" containerID="41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 
11:05:25.687421 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a"} err="failed to get container status \"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a\": rpc error: code = NotFound desc = could not find container \"41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a\": container with ID starting with 41137e75f9f5c19cc876f43bc5ed5e45aaa46d6bad5e34f1de8feb9dc0615e8a not found: ID does not exist" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.687454 4873 scope.go:117] "RemoveContainer" containerID="202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca" Feb 19 11:05:25 crc kubenswrapper[4873]: E0219 11:05:25.687835 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca\": container with ID starting with 202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca not found: ID does not exist" containerID="202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.687862 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca"} err="failed to get container status \"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca\": rpc error: code = NotFound desc = could not find container \"202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca\": container with ID starting with 202f50cf0f27476ee66f9c1c7fe1a62c5909a5ac115a540d13f5e5df05dd9cca not found: ID does not exist" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.687881 4873 scope.go:117] "RemoveContainer" containerID="31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687" Feb 19 11:05:25 crc 
kubenswrapper[4873]: E0219 11:05:25.688236 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687\": container with ID starting with 31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687 not found: ID does not exist" containerID="31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687" Feb 19 11:05:25 crc kubenswrapper[4873]: I0219 11:05:25.688254 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687"} err="failed to get container status \"31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687\": rpc error: code = NotFound desc = could not find container \"31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687\": container with ID starting with 31b07f779794e549307f2d11327a6f33ea37e2331b358a4ed386e94c4d86c687 not found: ID does not exist" Feb 19 11:05:27 crc kubenswrapper[4873]: I0219 11:05:27.496383 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" path="/var/lib/kubelet/pods/0f47487c-2e96-41ed-963c-de6c2f8bf152/volumes" Feb 19 11:07:48 crc kubenswrapper[4873]: I0219 11:07:48.240633 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:07:48 crc kubenswrapper[4873]: I0219 11:07:48.241052 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 11:08:18 crc kubenswrapper[4873]: I0219 11:08:18.240596 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:08:18 crc kubenswrapper[4873]: I0219 11:08:18.241134 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.240610 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.242351 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.242497 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.243465 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.243675 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" gracePeriod=600 Feb 19 11:08:48 crc kubenswrapper[4873]: E0219 11:08:48.369594 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.694886 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" exitCode=0 Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.694936 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"} Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.694972 4873 scope.go:117] "RemoveContainer" containerID="cb43480cbf19d4507b02920f7c71a6827821b09f9e9f251bd0c0f1803ed97739" Feb 19 11:08:48 crc kubenswrapper[4873]: I0219 11:08:48.695605 4873 
scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:08:48 crc kubenswrapper[4873]: E0219 11:08:48.695898 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:09:03 crc kubenswrapper[4873]: I0219 11:09:03.495710 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:09:03 crc kubenswrapper[4873]: E0219 11:09:03.500801 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:09:17 crc kubenswrapper[4873]: I0219 11:09:17.486146 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:09:17 crc kubenswrapper[4873]: E0219 11:09:17.487178 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 
11:09:25.263564 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:25 crc kubenswrapper[4873]: E0219 11:09:25.264514 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="registry-server" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.264529 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="registry-server" Feb 19 11:09:25 crc kubenswrapper[4873]: E0219 11:09:25.264550 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="extract-content" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.264556 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="extract-content" Feb 19 11:09:25 crc kubenswrapper[4873]: E0219 11:09:25.264588 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="extract-utilities" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.264594 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="extract-utilities" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.264773 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f47487c-2e96-41ed-963c-de6c2f8bf152" containerName="registry-server" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.266406 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.281005 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.414139 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.414189 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.414325 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gjwd\" (UniqueName: \"kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.517423 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.517492 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.517551 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gjwd\" (UniqueName: \"kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.518718 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.519024 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.537418 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gjwd\" (UniqueName: \"kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd\") pod \"certified-operators-jdlp2\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:25 crc kubenswrapper[4873]: I0219 11:09:25.600010 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:26 crc kubenswrapper[4873]: I0219 11:09:26.114911 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:27 crc kubenswrapper[4873]: I0219 11:09:27.036355 4873 generic.go:334] "Generic (PLEG): container finished" podID="84eca575-0570-46eb-9694-8646236d7aba" containerID="34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117" exitCode=0 Feb 19 11:09:27 crc kubenswrapper[4873]: I0219 11:09:27.036767 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerDied","Data":"34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117"} Feb 19 11:09:27 crc kubenswrapper[4873]: I0219 11:09:27.036800 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerStarted","Data":"4a8305bd2296efcc2dbff347d1dc9f7b7681ae97e62bc72f892f55c705a6994a"} Feb 19 11:09:29 crc kubenswrapper[4873]: I0219 11:09:29.060595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerStarted","Data":"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a"} Feb 19 11:09:29 crc kubenswrapper[4873]: I0219 11:09:29.485285 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:09:29 crc kubenswrapper[4873]: E0219 11:09:29.485937 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:09:30 crc kubenswrapper[4873]: I0219 11:09:30.071260 4873 generic.go:334] "Generic (PLEG): container finished" podID="84eca575-0570-46eb-9694-8646236d7aba" containerID="bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a" exitCode=0 Feb 19 11:09:30 crc kubenswrapper[4873]: I0219 11:09:30.071304 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerDied","Data":"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a"} Feb 19 11:09:31 crc kubenswrapper[4873]: I0219 11:09:31.081548 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerStarted","Data":"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871"} Feb 19 11:09:31 crc kubenswrapper[4873]: I0219 11:09:31.104453 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jdlp2" podStartSLOduration=2.653862857 podStartE2EDuration="6.104435463s" podCreationTimestamp="2026-02-19 11:09:25 +0000 UTC" firstStartedPulling="2026-02-19 11:09:27.038363537 +0000 UTC m=+5076.327795175" lastFinishedPulling="2026-02-19 11:09:30.488936143 +0000 UTC m=+5079.778367781" observedRunningTime="2026-02-19 11:09:31.096689488 +0000 UTC m=+5080.386121146" watchObservedRunningTime="2026-02-19 11:09:31.104435463 +0000 UTC m=+5080.393867101" Feb 19 11:09:35 crc kubenswrapper[4873]: I0219 11:09:35.600883 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:35 crc kubenswrapper[4873]: I0219 
11:09:35.602040 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:35 crc kubenswrapper[4873]: I0219 11:09:35.648222 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:36 crc kubenswrapper[4873]: I0219 11:09:36.174654 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:36 crc kubenswrapper[4873]: I0219 11:09:36.222933 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.151394 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jdlp2" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="registry-server" containerID="cri-o://f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871" gracePeriod=2 Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.649228 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.802314 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content\") pod \"84eca575-0570-46eb-9694-8646236d7aba\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.802662 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities\") pod \"84eca575-0570-46eb-9694-8646236d7aba\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.803809 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities" (OuterVolumeSpecName: "utilities") pod "84eca575-0570-46eb-9694-8646236d7aba" (UID: "84eca575-0570-46eb-9694-8646236d7aba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.803946 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gjwd\" (UniqueName: \"kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd\") pod \"84eca575-0570-46eb-9694-8646236d7aba\" (UID: \"84eca575-0570-46eb-9694-8646236d7aba\") " Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.805191 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.810869 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd" (OuterVolumeSpecName: "kube-api-access-5gjwd") pod "84eca575-0570-46eb-9694-8646236d7aba" (UID: "84eca575-0570-46eb-9694-8646236d7aba"). InnerVolumeSpecName "kube-api-access-5gjwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.866272 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84eca575-0570-46eb-9694-8646236d7aba" (UID: "84eca575-0570-46eb-9694-8646236d7aba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.906677 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gjwd\" (UniqueName: \"kubernetes.io/projected/84eca575-0570-46eb-9694-8646236d7aba-kube-api-access-5gjwd\") on node \"crc\" DevicePath \"\"" Feb 19 11:09:38 crc kubenswrapper[4873]: I0219 11:09:38.906718 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84eca575-0570-46eb-9694-8646236d7aba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.161907 4873 generic.go:334] "Generic (PLEG): container finished" podID="84eca575-0570-46eb-9694-8646236d7aba" containerID="f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871" exitCode=0 Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.161957 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerDied","Data":"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871"} Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.161988 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdlp2" event={"ID":"84eca575-0570-46eb-9694-8646236d7aba","Type":"ContainerDied","Data":"4a8305bd2296efcc2dbff347d1dc9f7b7681ae97e62bc72f892f55c705a6994a"} Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.162007 4873 scope.go:117] "RemoveContainer" containerID="f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.162177 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdlp2" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.191055 4873 scope.go:117] "RemoveContainer" containerID="bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.220312 4873 scope.go:117] "RemoveContainer" containerID="34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.228560 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.237885 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jdlp2"] Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.282210 4873 scope.go:117] "RemoveContainer" containerID="f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871" Feb 19 11:09:39 crc kubenswrapper[4873]: E0219 11:09:39.282658 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871\": container with ID starting with f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871 not found: ID does not exist" containerID="f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.282782 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871"} err="failed to get container status \"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871\": rpc error: code = NotFound desc = could not find container \"f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871\": container with ID starting with f71393ccfb697c7f860c7995ccd8ba3b74dd9fa62fe2d5e21c727436b4f3d871 not 
found: ID does not exist" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.282913 4873 scope.go:117] "RemoveContainer" containerID="bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a" Feb 19 11:09:39 crc kubenswrapper[4873]: E0219 11:09:39.282837 4873 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84eca575_0570_46eb_9694_8646236d7aba.slice/crio-4a8305bd2296efcc2dbff347d1dc9f7b7681ae97e62bc72f892f55c705a6994a\": RecentStats: unable to find data in memory cache]" Feb 19 11:09:39 crc kubenswrapper[4873]: E0219 11:09:39.283440 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a\": container with ID starting with bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a not found: ID does not exist" containerID="bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.283469 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a"} err="failed to get container status \"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a\": rpc error: code = NotFound desc = could not find container \"bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a\": container with ID starting with bd4a9f0919f6049daf2c8d424e66fe24bb3a5c9ea29b2e45e7d71be383355a3a not found: ID does not exist" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.283487 4873 scope.go:117] "RemoveContainer" containerID="34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117" Feb 19 11:09:39 crc kubenswrapper[4873]: E0219 11:09:39.283947 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117\": container with ID starting with 34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117 not found: ID does not exist" containerID="34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.284011 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117"} err="failed to get container status \"34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117\": rpc error: code = NotFound desc = could not find container \"34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117\": container with ID starting with 34065fcc2769694373b98c09ba55e3507a8fb714d7e6db36212106b4806b4117 not found: ID does not exist" Feb 19 11:09:39 crc kubenswrapper[4873]: I0219 11:09:39.499395 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84eca575-0570-46eb-9694-8646236d7aba" path="/var/lib/kubelet/pods/84eca575-0570-46eb-9694-8646236d7aba/volumes" Feb 19 11:09:43 crc kubenswrapper[4873]: I0219 11:09:43.484788 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:09:43 crc kubenswrapper[4873]: E0219 11:09:43.485878 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:09:58 crc kubenswrapper[4873]: I0219 11:09:58.484551 4873 scope.go:117] "RemoveContainer" 
containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:09:58 crc kubenswrapper[4873]: E0219 11:09:58.485213 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:10:13 crc kubenswrapper[4873]: I0219 11:10:13.488135 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:10:13 crc kubenswrapper[4873]: E0219 11:10:13.488959 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:10:24 crc kubenswrapper[4873]: I0219 11:10:24.484861 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:10:24 crc kubenswrapper[4873]: E0219 11:10:24.485677 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:10:29 crc kubenswrapper[4873]: I0219 11:10:29.640811 4873 generic.go:334] "Generic (PLEG): container finished" podID="5e5a79da-a068-4a68-ba79-6719ea0fb353" containerID="edd9b7584d145cddbcf9d8449ca8d5546aa8224b7f3731235eeab85ccb091862" exitCode=0
Feb 19 11:10:29 crc kubenswrapper[4873]: I0219 11:10:29.640900 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e5a79da-a068-4a68-ba79-6719ea0fb353","Type":"ContainerDied","Data":"edd9b7584d145cddbcf9d8449ca8d5546aa8224b7f3731235eeab85ccb091862"}
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.005723 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.119706 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.119812 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.119878 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.119931 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.119973 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120021 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120064 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120091 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120177 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqm6\" (UniqueName: \"kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6\") pod \"5e5a79da-a068-4a68-ba79-6719ea0fb353\" (UID: \"5e5a79da-a068-4a68-ba79-6719ea0fb353\") "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120682 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.120839 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data" (OuterVolumeSpecName: "config-data") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.128369 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6" (OuterVolumeSpecName: "kube-api-access-hdqm6") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "kube-api-access-hdqm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.129649 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.130827 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.151923 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.155185 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.155676 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.182733 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5e5a79da-a068-4a68-ba79-6719ea0fb353" (UID: "5e5a79da-a068-4a68-ba79-6719ea0fb353"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222391 4873 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222431 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222443 4873 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222452 4873 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5e5a79da-a068-4a68-ba79-6719ea0fb353-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222463 4873 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222480 4873 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222495 4873 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5e5a79da-a068-4a68-ba79-6719ea0fb353-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222511 4873 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e5a79da-a068-4a68-ba79-6719ea0fb353-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.222536 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqm6\" (UniqueName: \"kubernetes.io/projected/5e5a79da-a068-4a68-ba79-6719ea0fb353-kube-api-access-hdqm6\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.246736 4873 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.325475 4873 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.662603 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5e5a79da-a068-4a68-ba79-6719ea0fb353","Type":"ContainerDied","Data":"2ab42e52f993d6514497f49e7da17659fa93e4ac5da7295a0f0f52c753b83b71"}
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.662966 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab42e52f993d6514497f49e7da17659fa93e4ac5da7295a0f0f52c753b83b71"
Feb 19 11:10:31 crc kubenswrapper[4873]: I0219 11:10:31.662638 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 19 11:10:37 crc kubenswrapper[4873]: I0219 11:10:37.485194 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:10:37 crc kubenswrapper[4873]: E0219 11:10:37.486516 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.092290 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 11:10:41 crc kubenswrapper[4873]: E0219 11:10:41.093357 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="registry-server"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093373 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="registry-server"
Feb 19 11:10:41 crc kubenswrapper[4873]: E0219 11:10:41.093398 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="extract-content"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093406 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="extract-content"
Feb 19 11:10:41 crc kubenswrapper[4873]: E0219 11:10:41.093417 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5a79da-a068-4a68-ba79-6719ea0fb353" containerName="tempest-tests-tempest-tests-runner"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093425 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5a79da-a068-4a68-ba79-6719ea0fb353" containerName="tempest-tests-tempest-tests-runner"
Feb 19 11:10:41 crc kubenswrapper[4873]: E0219 11:10:41.093465 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="extract-utilities"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093475 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="extract-utilities"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093697 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="84eca575-0570-46eb-9694-8646236d7aba" containerName="registry-server"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.093729 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5a79da-a068-4a68-ba79-6719ea0fb353" containerName="tempest-tests-tempest-tests-runner"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.094534 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.096895 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5bdht"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.102575 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.205536 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwhs\" (UniqueName: \"kubernetes.io/projected/58738a83-0734-4889-9b0e-650e43f6dbb7-kube-api-access-5nwhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.205617 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.308681 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwhs\" (UniqueName: \"kubernetes.io/projected/58738a83-0734-4889-9b0e-650e43f6dbb7-kube-api-access-5nwhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.308880 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.309500 4873 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.332783 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwhs\" (UniqueName: \"kubernetes.io/projected/58738a83-0734-4889-9b0e-650e43f6dbb7-kube-api-access-5nwhs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.343280 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"58738a83-0734-4889-9b0e-650e43f6dbb7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.418937 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.838540 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 11:10:41 crc kubenswrapper[4873]: I0219 11:10:41.846944 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 11:10:42 crc kubenswrapper[4873]: I0219 11:10:42.778962 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"58738a83-0734-4889-9b0e-650e43f6dbb7","Type":"ContainerStarted","Data":"eaf32f57e0eb0c863d6da631f84ac1ca33473332d8b0e93ea2bb36f2f78e6202"}
Feb 19 11:10:44 crc kubenswrapper[4873]: I0219 11:10:44.159557 4873 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-prw4c" podUID="4cc54252-cfdf-4b71-bfa5-552dcd26500d" containerName="registry-server" probeResult="failure" output=<
Feb 19 11:10:44 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s
Feb 19 11:10:44 crc kubenswrapper[4873]: >
Feb 19 11:10:44 crc kubenswrapper[4873]: I0219 11:10:44.164635 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-prw4c" podUID="4cc54252-cfdf-4b71-bfa5-552dcd26500d" containerName="registry-server" probeResult="failure" output=<
Feb 19 11:10:44 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s
Feb 19 11:10:44 crc kubenswrapper[4873]: >
Feb 19 11:10:45 crc kubenswrapper[4873]: I0219 11:10:45.808553 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"58738a83-0734-4889-9b0e-650e43f6dbb7","Type":"ContainerStarted","Data":"23b91139685a6cdb90a8794eb5accb2a7251f0de1685199bd21c49c5f88f9f84"}
Feb 19 11:10:45 crc kubenswrapper[4873]: I0219 11:10:45.831452 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.985016231 podStartE2EDuration="4.831415835s" podCreationTimestamp="2026-02-19 11:10:41 +0000 UTC" firstStartedPulling="2026-02-19 11:10:41.84675338 +0000 UTC m=+5151.136185018" lastFinishedPulling="2026-02-19 11:10:44.693152984 +0000 UTC m=+5153.982584622" observedRunningTime="2026-02-19 11:10:45.822378538 +0000 UTC m=+5155.111810176" watchObservedRunningTime="2026-02-19 11:10:45.831415835 +0000 UTC m=+5155.120847513"
Feb 19 11:10:48 crc kubenswrapper[4873]: I0219 11:10:48.485307 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:10:48 crc kubenswrapper[4873]: E0219 11:10:48.485922 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:11:01 crc kubenswrapper[4873]: I0219 11:11:01.490547 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:11:01 crc kubenswrapper[4873]: E0219 11:11:01.491356 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.415033 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lms9s/must-gather-lgwst"]
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.417180 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.420584 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lms9s"/"openshift-service-ca.crt"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.420936 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lms9s"/"default-dockercfg-6s6vh"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.422173 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lms9s"/"kube-root-ca.crt"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.431793 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b49\" (UniqueName: \"kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.431938 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.439651 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lms9s/must-gather-lgwst"]
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.536379 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b49\" (UniqueName: \"kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.540224 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.540724 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.555329 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b49\" (UniqueName: \"kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49\") pod \"must-gather-lgwst\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:09 crc kubenswrapper[4873]: I0219 11:11:09.737866 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/must-gather-lgwst"
Feb 19 11:11:10 crc kubenswrapper[4873]: I0219 11:11:10.425502 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lms9s/must-gather-lgwst"]
Feb 19 11:11:11 crc kubenswrapper[4873]: I0219 11:11:11.103559 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/must-gather-lgwst" event={"ID":"a6f4f1cb-6b34-4940-be18-6ba992fd72d7","Type":"ContainerStarted","Data":"80a48f2e0ac5a77d4e93a660520e3615ef4baebafeaa87193cb6260f26588930"}
Feb 19 11:11:16 crc kubenswrapper[4873]: I0219 11:11:16.484620 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:11:16 crc kubenswrapper[4873]: E0219 11:11:16.485438 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:11:17 crc kubenswrapper[4873]: I0219 11:11:17.175795 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/must-gather-lgwst" event={"ID":"a6f4f1cb-6b34-4940-be18-6ba992fd72d7","Type":"ContainerStarted","Data":"d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620"}
Feb 19 11:11:17 crc kubenswrapper[4873]: I0219 11:11:17.176353 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/must-gather-lgwst" event={"ID":"a6f4f1cb-6b34-4940-be18-6ba992fd72d7","Type":"ContainerStarted","Data":"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7"}
Feb 19 11:11:17 crc kubenswrapper[4873]: I0219 11:11:17.192802 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lms9s/must-gather-lgwst" podStartSLOduration=2.193241735 podStartE2EDuration="8.192783645s" podCreationTimestamp="2026-02-19 11:11:09 +0000 UTC" firstStartedPulling="2026-02-19 11:11:10.444779365 +0000 UTC m=+5179.734211003" lastFinishedPulling="2026-02-19 11:11:16.444321275 +0000 UTC m=+5185.733752913" observedRunningTime="2026-02-19 11:11:17.192417296 +0000 UTC m=+5186.481848934" watchObservedRunningTime="2026-02-19 11:11:17.192783645 +0000 UTC m=+5186.482215293"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.228538 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lms9s/crc-debug-gjmz2"]
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.230479 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.313285 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wrzd\" (UniqueName: \"kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.313736 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.415956 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wrzd\" (UniqueName: \"kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.416061 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.416180 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.438857 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wrzd\" (UniqueName: \"kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd\") pod \"crc-debug-gjmz2\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") " pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:22 crc kubenswrapper[4873]: I0219 11:11:22.549580 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:11:23 crc kubenswrapper[4873]: I0219 11:11:23.227713 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-gjmz2" event={"ID":"610332ca-5405-4232-bdf7-e716c30e4e29","Type":"ContainerStarted","Data":"4565dec49529c796afa90816eae8a0e8246af5cab44bb9b4785466235f31f90e"}
Feb 19 11:11:30 crc kubenswrapper[4873]: I0219 11:11:30.484495 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:11:30 crc kubenswrapper[4873]: E0219 11:11:30.485337 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:11:35 crc kubenswrapper[4873]: I0219 11:11:35.349938 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-gjmz2" event={"ID":"610332ca-5405-4232-bdf7-e716c30e4e29","Type":"ContainerStarted","Data":"e1e47bb1ff8a672d572432525165620c6fe8cdbb5878839e48d767115053def9"}
Feb 19 11:11:35 crc kubenswrapper[4873]: I0219 11:11:35.371269 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lms9s/crc-debug-gjmz2" podStartSLOduration=0.986640113 podStartE2EDuration="13.371252908s" podCreationTimestamp="2026-02-19 11:11:22 +0000 UTC" firstStartedPulling="2026-02-19 11:11:22.610977897 +0000 UTC m=+5191.900409535" lastFinishedPulling="2026-02-19 11:11:34.995590692 +0000 UTC m=+5204.285022330" observedRunningTime="2026-02-19 11:11:35.361156305 +0000 UTC m=+5204.650587983" watchObservedRunningTime="2026-02-19 11:11:35.371252908 +0000 UTC m=+5204.660684546"
Feb 19 11:11:41 crc kubenswrapper[4873]: I0219 11:11:41.493157 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:11:41 crc kubenswrapper[4873]: E0219 11:11:41.493999 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:11:55 crc kubenswrapper[4873]: I0219 11:11:55.485009 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:11:55 crc kubenswrapper[4873]: E0219 11:11:55.485793 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:12:10 crc kubenswrapper[4873]: I0219 11:12:10.484333 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:12:10 crc kubenswrapper[4873]: E0219 11:12:10.485085 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:12:21 crc kubenswrapper[4873]: I0219 11:12:21.496634 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7"
Feb 19 11:12:21 crc kubenswrapper[4873]: E0219 11:12:21.497493 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:12:24 crc kubenswrapper[4873]: I0219 11:12:24.895516 4873 generic.go:334] "Generic (PLEG): container finished" podID="610332ca-5405-4232-bdf7-e716c30e4e29" containerID="e1e47bb1ff8a672d572432525165620c6fe8cdbb5878839e48d767115053def9" exitCode=0
Feb 19 11:12:24 crc kubenswrapper[4873]: I0219 11:12:24.895595 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-gjmz2" event={"ID":"610332ca-5405-4232-bdf7-e716c30e4e29","Type":"ContainerDied","Data":"e1e47bb1ff8a672d572432525165620c6fe8cdbb5878839e48d767115053def9"}
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.609692 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-gjmz2"
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.664567 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-gjmz2"]
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.675249 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-gjmz2"]
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.696607 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wrzd\" (UniqueName: \"kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd\") pod \"610332ca-5405-4232-bdf7-e716c30e4e29\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") "
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.696680 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host\") pod \"610332ca-5405-4232-bdf7-e716c30e4e29\" (UID: \"610332ca-5405-4232-bdf7-e716c30e4e29\") "
Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.696853 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host" (OuterVolumeSpecName: "host") pod "610332ca-5405-4232-bdf7-e716c30e4e29" (UID: "610332ca-5405-4232-bdf7-e716c30e4e29"). InnerVolumeSpecName "host".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.697420 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/610332ca-5405-4232-bdf7-e716c30e4e29-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.705992 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd" (OuterVolumeSpecName: "kube-api-access-2wrzd") pod "610332ca-5405-4232-bdf7-e716c30e4e29" (UID: "610332ca-5405-4232-bdf7-e716c30e4e29"). InnerVolumeSpecName "kube-api-access-2wrzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.801534 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wrzd\" (UniqueName: \"kubernetes.io/projected/610332ca-5405-4232-bdf7-e716c30e4e29-kube-api-access-2wrzd\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.918025 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4565dec49529c796afa90816eae8a0e8246af5cab44bb9b4785466235f31f90e" Feb 19 11:12:26 crc kubenswrapper[4873]: I0219 11:12:26.918341 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-gjmz2" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.494281 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610332ca-5405-4232-bdf7-e716c30e4e29" path="/var/lib/kubelet/pods/610332ca-5405-4232-bdf7-e716c30e4e29/volumes" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.787516 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lms9s/crc-debug-58t8c"] Feb 19 11:12:27 crc kubenswrapper[4873]: E0219 11:12:27.788047 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610332ca-5405-4232-bdf7-e716c30e4e29" containerName="container-00" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.788065 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="610332ca-5405-4232-bdf7-e716c30e4e29" containerName="container-00" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.788317 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="610332ca-5405-4232-bdf7-e716c30e4e29" containerName="container-00" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.789159 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.936234 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:27 crc kubenswrapper[4873]: I0219 11:12:27.936300 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6j7p\" (UniqueName: \"kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.038199 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.038340 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.038641 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6j7p\" (UniqueName: \"kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc 
kubenswrapper[4873]: I0219 11:12:28.065093 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6j7p\" (UniqueName: \"kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p\") pod \"crc-debug-58t8c\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.115572 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.944871 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-58t8c" event={"ID":"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a","Type":"ContainerStarted","Data":"9881fb46a0d2568f8dddfc8da179e6f97035508c7f4131e1a48dad7a6c2fd139"} Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.945295 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-58t8c" event={"ID":"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a","Type":"ContainerStarted","Data":"c3a9a5b478958c6bcc89f1f4f518abb55bf2fac1e192b323dbd5a87dee63bee8"} Feb 19 11:12:28 crc kubenswrapper[4873]: I0219 11:12:28.978824 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lms9s/crc-debug-58t8c" podStartSLOduration=1.9788021850000002 podStartE2EDuration="1.978802185s" podCreationTimestamp="2026-02-19 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:12:28.95871427 +0000 UTC m=+5258.248145908" watchObservedRunningTime="2026-02-19 11:12:28.978802185 +0000 UTC m=+5258.268233823" Feb 19 11:12:29 crc kubenswrapper[4873]: I0219 11:12:29.954579 4873 generic.go:334] "Generic (PLEG): container finished" podID="ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" 
containerID="9881fb46a0d2568f8dddfc8da179e6f97035508c7f4131e1a48dad7a6c2fd139" exitCode=0 Feb 19 11:12:29 crc kubenswrapper[4873]: I0219 11:12:29.954634 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-58t8c" event={"ID":"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a","Type":"ContainerDied","Data":"9881fb46a0d2568f8dddfc8da179e6f97035508c7f4131e1a48dad7a6c2fd139"} Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.089205 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.200465 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host\") pod \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.200759 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host" (OuterVolumeSpecName: "host") pod "ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" (UID: "ab4a09e3-7ea1-4764-b0e6-75c1b549b77a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.201340 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6j7p\" (UniqueName: \"kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p\") pod \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\" (UID: \"ab4a09e3-7ea1-4764-b0e6-75c1b549b77a\") " Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.202412 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.224477 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p" (OuterVolumeSpecName: "kube-api-access-b6j7p") pod "ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" (UID: "ab4a09e3-7ea1-4764-b0e6-75c1b549b77a"). InnerVolumeSpecName "kube-api-access-b6j7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.304784 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6j7p\" (UniqueName: \"kubernetes.io/projected/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a-kube-api-access-b6j7p\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.576407 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-58t8c"] Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.585039 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-58t8c"] Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.977136 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a9a5b478958c6bcc89f1f4f518abb55bf2fac1e192b323dbd5a87dee63bee8" Feb 19 11:12:31 crc kubenswrapper[4873]: I0219 11:12:31.977210 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-58t8c" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.769296 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lms9s/crc-debug-6mcmb"] Feb 19 11:12:32 crc kubenswrapper[4873]: E0219 11:12:32.769861 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" containerName="container-00" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.769876 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" containerName="container-00" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.770145 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" containerName="container-00" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.771539 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.838322 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.838792 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbjg\" (UniqueName: \"kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.940905 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.941074 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbbjg\" (UniqueName: \"kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc kubenswrapper[4873]: I0219 11:12:32.941084 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:32 crc 
kubenswrapper[4873]: I0219 11:12:32.962044 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbbjg\" (UniqueName: \"kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg\") pod \"crc-debug-6mcmb\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:33 crc kubenswrapper[4873]: I0219 11:12:33.096010 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:33 crc kubenswrapper[4873]: W0219 11:12:33.142931 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb446129e_ed0f_4d69_b8e6_4080c69ec21b.slice/crio-e3d5e07c91a322a359e34accd84aed9f05e4c4c951146784683e31d5d331c1e8 WatchSource:0}: Error finding container e3d5e07c91a322a359e34accd84aed9f05e4c4c951146784683e31d5d331c1e8: Status 404 returned error can't find the container with id e3d5e07c91a322a359e34accd84aed9f05e4c4c951146784683e31d5d331c1e8 Feb 19 11:12:33 crc kubenswrapper[4873]: I0219 11:12:33.497058 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4a09e3-7ea1-4764-b0e6-75c1b549b77a" path="/var/lib/kubelet/pods/ab4a09e3-7ea1-4764-b0e6-75c1b549b77a/volumes" Feb 19 11:12:33 crc kubenswrapper[4873]: I0219 11:12:33.995401 4873 generic.go:334] "Generic (PLEG): container finished" podID="b446129e-ed0f-4d69-b8e6-4080c69ec21b" containerID="40d299159e87f29a6a7e1c52a9e8d2f2733e3bb33ce23f6b1515f7d41477be8f" exitCode=0 Feb 19 11:12:33 crc kubenswrapper[4873]: I0219 11:12:33.995476 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" event={"ID":"b446129e-ed0f-4d69-b8e6-4080c69ec21b","Type":"ContainerDied","Data":"40d299159e87f29a6a7e1c52a9e8d2f2733e3bb33ce23f6b1515f7d41477be8f"} Feb 19 11:12:33 crc kubenswrapper[4873]: I0219 11:12:33.995723 4873 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" event={"ID":"b446129e-ed0f-4d69-b8e6-4080c69ec21b","Type":"ContainerStarted","Data":"e3d5e07c91a322a359e34accd84aed9f05e4c4c951146784683e31d5d331c1e8"} Feb 19 11:12:34 crc kubenswrapper[4873]: I0219 11:12:34.037183 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-6mcmb"] Feb 19 11:12:34 crc kubenswrapper[4873]: I0219 11:12:34.052408 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lms9s/crc-debug-6mcmb"] Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.511357 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.596504 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host\") pod \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.596641 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host" (OuterVolumeSpecName: "host") pod "b446129e-ed0f-4d69-b8e6-4080c69ec21b" (UID: "b446129e-ed0f-4d69-b8e6-4080c69ec21b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.596686 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbjg\" (UniqueName: \"kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg\") pod \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\" (UID: \"b446129e-ed0f-4d69-b8e6-4080c69ec21b\") " Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.599789 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b446129e-ed0f-4d69-b8e6-4080c69ec21b-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.618325 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg" (OuterVolumeSpecName: "kube-api-access-cbbjg") pod "b446129e-ed0f-4d69-b8e6-4080c69ec21b" (UID: "b446129e-ed0f-4d69-b8e6-4080c69ec21b"). InnerVolumeSpecName "kube-api-access-cbbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:12:35 crc kubenswrapper[4873]: I0219 11:12:35.701433 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbbjg\" (UniqueName: \"kubernetes.io/projected/b446129e-ed0f-4d69-b8e6-4080c69ec21b-kube-api-access-cbbjg\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:36 crc kubenswrapper[4873]: I0219 11:12:36.015312 4873 scope.go:117] "RemoveContainer" containerID="40d299159e87f29a6a7e1c52a9e8d2f2733e3bb33ce23f6b1515f7d41477be8f" Feb 19 11:12:36 crc kubenswrapper[4873]: I0219 11:12:36.015442 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lms9s/crc-debug-6mcmb" Feb 19 11:12:36 crc kubenswrapper[4873]: I0219 11:12:36.484490 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:12:36 crc kubenswrapper[4873]: E0219 11:12:36.485253 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:12:37 crc kubenswrapper[4873]: I0219 11:12:37.522656 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b446129e-ed0f-4d69-b8e6-4080c69ec21b" path="/var/lib/kubelet/pods/b446129e-ed0f-4d69-b8e6-4080c69ec21b/volumes" Feb 19 11:12:47 crc kubenswrapper[4873]: I0219 11:12:47.485808 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:12:47 crc kubenswrapper[4873]: E0219 11:12:47.486611 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:12:59 crc kubenswrapper[4873]: I0219 11:12:59.484293 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:12:59 crc kubenswrapper[4873]: E0219 11:12:59.485118 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.353886 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c4d59d6dd-4nh9w_76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3/barbican-api/0.log" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.484471 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:13:10 crc kubenswrapper[4873]: E0219 11:13:10.484744 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.561058 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c4d59d6dd-4nh9w_76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3/barbican-api-log/0.log" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.588349 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-667444df98-tdgw9_9be5e1ee-a214-46ca-a5bf-d1d337848085/barbican-keystone-listener/0.log" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.737572 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-667444df98-tdgw9_9be5e1ee-a214-46ca-a5bf-d1d337848085/barbican-keystone-listener-log/0.log" Feb 19 
11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.826641 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-596d5556df-fx4q8_fc48b70c-5ab9-4765-a8cd-5985a3f63854/barbican-worker/0.log" Feb 19 11:13:10 crc kubenswrapper[4873]: I0219 11:13:10.856130 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-596d5556df-fx4q8_fc48b70c-5ab9-4765-a8cd-5985a3f63854/barbican-worker-log/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.094539 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r_fb8aa6eb-a92d-47ab-803f-664399242dde/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.407540 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/ceilometer-notification-agent/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.491965 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/ceilometer-central-agent/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.518389 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/proxy-httpd/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.613471 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/sg-core/0.log" Feb 19 11:13:11 crc kubenswrapper[4873]: I0219 11:13:11.861011 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f3dabe51-c676-42bb-936a-d784ee2e565a/cinder-api-log/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.205282 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_312e766d-4086-4bab-bf8f-9a154f1da5b5/probe/0.log" 
Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.283094 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f3dabe51-c676-42bb-936a-d784ee2e565a/cinder-api/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.290501 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_312e766d-4086-4bab-bf8f-9a154f1da5b5/cinder-backup/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.488938 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1/cinder-scheduler/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.574321 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1/probe/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.728378 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_717b3122-e7c6-4cbe-8528-4b582dd7adc5/cinder-volume/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.803230 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_717b3122-e7c6-4cbe-8528-4b582dd7adc5/probe/0.log" Feb 19 11:13:12 crc kubenswrapper[4873]: I0219 11:13:12.982144 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8268173a-e7be-4edd-a1e8-bed3486b138e/probe/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.158458 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8268173a-e7be-4edd-a1e8-bed3486b138e/cinder-volume/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.261056 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-snp5b_f0739ccd-765a-42c4-89b4-de6adf188e24/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:13 
crc kubenswrapper[4873]: I0219 11:13:13.365299 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2_40ec1f13-0b91-4c7c-a13e-11e60f55e627/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.501237 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/init/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.746379 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/init/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.879215 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj_ab7d5a49-ac61-4963-8766-1716098f3d4c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:13 crc kubenswrapper[4873]: I0219 11:13:13.912041 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/dnsmasq-dns/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.094090 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_09cfd898-398f-41ae-8c45-1ed215b69683/glance-httpd/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.157323 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_09cfd898-398f-41ae-8c45-1ed215b69683/glance-log/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.332113 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0df7963-e78f-457c-a27f-45c26232cfa7/glance-log/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.350643 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_c0df7963-e78f-457c-a27f-45c26232cfa7/glance-httpd/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.683565 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6687d9896d-v96j2_fa527f64-6e38-48c2-9927-a319f4579070/horizon/0.log" Feb 19 11:13:14 crc kubenswrapper[4873]: I0219 11:13:14.734810 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6_537c2ac8-0912-4609-ab4e-760060a78d52/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.061938 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s2jwj_4b127e45-b09c-4e11-9423-58f1f51effd4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.350707 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6687d9896d-v96j2_fa527f64-6e38-48c2-9927-a319f4579070/horizon-log/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.618347 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5fcd445c48-xvpw4_ed86f09e-909d-451b-96c0-9b4b7b27eb03/keystone-api/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.650871 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524981-pxsmx_3f08f0c4-870d-4d9a-8a82-ce22827ce779/keystone-cron/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.696633 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_84c63c73-45f3-4d27-a3a3-cbfecd9e1810/kube-state-metrics/0.log" Feb 19 11:13:15 crc kubenswrapper[4873]: I0219 11:13:15.933957 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf_2baa296e-fb37-4d90-a7e4-68f61006e085/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:16 crc kubenswrapper[4873]: I0219 11:13:16.590435 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb_a607f592-ebca-4bf5-9e98-f9e2bc131ff1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:16 crc kubenswrapper[4873]: I0219 11:13:16.814883 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76cc4fb9fc-vdfd4_f168d086-aaa7-4a6e-9a65-5ab28e10a7e8/neutron-httpd/0.log" Feb 19 11:13:16 crc kubenswrapper[4873]: I0219 11:13:16.940343 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76cc4fb9fc-vdfd4_f168d086-aaa7-4a6e-9a65-5ab28e10a7e8/neutron-api/0.log" Feb 19 11:13:17 crc kubenswrapper[4873]: I0219 11:13:17.059810 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/setup-container/0.log" Feb 19 11:13:17 crc kubenswrapper[4873]: I0219 11:13:17.133552 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/setup-container/0.log" Feb 19 11:13:17 crc kubenswrapper[4873]: I0219 11:13:17.297631 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/rabbitmq/0.log" Feb 19 11:13:17 crc kubenswrapper[4873]: I0219 11:13:17.902163 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c25b9f1f-0533-4e00-a926-08639b1b2266/nova-cell0-conductor-conductor/0.log" Feb 19 11:13:18 crc kubenswrapper[4873]: I0219 11:13:18.523218 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_0688136a-f0b5-4a2a-8f08-9c99d9c3644c/nova-cell1-conductor-conductor/0.log" Feb 19 11:13:18 crc kubenswrapper[4873]: I0219 11:13:18.937379 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cf46452a-f49d-48ab-a235-9e96f89c931f/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 11:13:19 crc kubenswrapper[4873]: I0219 11:13:19.036953 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4f4e613e-0a31-4191-9afb-4fd0300586f9/nova-api-log/0.log" Feb 19 11:13:19 crc kubenswrapper[4873]: I0219 11:13:19.248666 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v25t6_ce5f426d-554a-469a-be1e-e3e1b9bfa68e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:19 crc kubenswrapper[4873]: I0219 11:13:19.381518 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15cbab3c-9843-4bf6-b0e8-b65dec1e5112/nova-metadata-log/0.log" Feb 19 11:13:19 crc kubenswrapper[4873]: I0219 11:13:19.486545 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4f4e613e-0a31-4191-9afb-4fd0300586f9/nova-api-api/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.086219 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/mysql-bootstrap/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.231221 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_21bb5d7d-6565-484a-af2d-0edcff2729b3/memcached/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.237804 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_adb0395e-00f8-4bc6-a0a6-2b956235c58c/nova-scheduler-scheduler/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.306932 4873 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/galera/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.369594 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/mysql-bootstrap/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.527751 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/mysql-bootstrap/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.846286 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/mysql-bootstrap/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.868146 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/galera/0.log" Feb 19 11:13:20 crc kubenswrapper[4873]: I0219 11:13:20.888674 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5c4eb2b5-d272-49ff-938e-3e3359d29f46/openstackclient/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.082948 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-djxfb_888c3336-cd8a-4bf2-805f-6b473fb272f4/openstack-network-exporter/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.198684 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server-init/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.351979 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15cbab3c-9843-4bf6-b0e8-b65dec1e5112/nova-metadata-metadata/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.427615 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server-init/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.451415 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.644253 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovs-vswitchd/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.649481 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vsnt5_b0ab9d21-0c11-4940-ad43-3e20c46012ad/ovn-controller/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.757803 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dks5c_f5d576b5-56dd-4f9f-b67b-0ee87213ea78/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.863636 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd6df8e5-8bc5-4bd5-b466-a90642932cc2/openstack-network-exporter/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.871826 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd6df8e5-8bc5-4bd5-b466-a90642932cc2/ovn-northd/0.log" Feb 19 11:13:21 crc kubenswrapper[4873]: I0219 11:13:21.994496 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4574f6e3-d697-424c-a9f1-7b74afb82324/openstack-network-exporter/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.082375 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4574f6e3-d697-424c-a9f1-7b74afb82324/ovsdbserver-nb/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.090538 4873 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_877efa5f-4357-4396-8805-729237cd4e8f/openstack-network-exporter/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.217738 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_877efa5f-4357-4396-8805-729237cd4e8f/ovsdbserver-sb/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.475984 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6696d67b98-wrvnm_c5d4dde9-793b-403e-8701-84cca6a509e1/placement-api/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.516399 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/init-config-reloader/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.539711 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6696d67b98-wrvnm_c5d4dde9-793b-403e-8701-84cca6a509e1/placement-log/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.763565 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/init-config-reloader/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.785753 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/config-reloader/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.810381 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/thanos-sidecar/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.812629 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/prometheus/0.log" Feb 19 11:13:22 crc kubenswrapper[4873]: I0219 11:13:22.966479 4873 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/setup-container/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.140410 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/setup-container/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.208259 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/setup-container/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.227085 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/rabbitmq/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.414350 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/setup-container/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.484705 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj_157ee933-b692-4c92-bcbd-967bc1cd377c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.528638 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/rabbitmq/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.908227 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn_fda37ba3-82f5-4d49-a15f-4dca53649ec7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:23 crc kubenswrapper[4873]: I0219 11:13:23.920080 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mt2n6_3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.060201 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5wvjf_7843f72c-5559-44d6-86e0-62f013e0a073/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.130410 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sw8hj_15999617-f2b4-4a3f-911d-422db799fa37/ssh-known-hosts-edpm-deployment/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.343584 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c6d694569-qbpxm_d51beb70-e455-4e75-9e06-863b41fbf9a8/proxy-server/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.416932 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c6d694569-qbpxm_d51beb70-e455-4e75-9e06-863b41fbf9a8/proxy-httpd/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.485866 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:13:24 crc kubenswrapper[4873]: E0219 11:13:24.486626 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.545349 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-mx6qq_91fbca18-847d-4e7b-8a40-e52dd348d155/swift-ring-rebalance/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.592383 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-auditor/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.709075 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-reaper/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.758000 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-replicator/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.799372 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-server/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.824496 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-auditor/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.910203 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-replicator/0.log" Feb 19 11:13:24 crc kubenswrapper[4873]: I0219 11:13:24.930754 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-server/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.030030 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-updater/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.122775 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-expirer/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.128243 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-auditor/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.240074 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-server/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.252343 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-replicator/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.273402 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-updater/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.357028 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/swift-recon-cron/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.358193 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/rsync/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.530080 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz_bf143721-2963-4009-8e23-0c283b4a88a3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.618926 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5e5a79da-a068-4a68-ba79-6719ea0fb353/tempest-tests-tempest-tests-runner/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.752338 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_58738a83-0734-4889-9b0e-650e43f6dbb7/test-operator-logs-container/0.log" Feb 19 11:13:25 crc kubenswrapper[4873]: I0219 11:13:25.819171 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh_28f40398-582f-40ed-92b8-2ff5a19d138d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:13:26 crc kubenswrapper[4873]: I0219 11:13:26.456858 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_3d0e231c-7848-4f57-a28b-dfec3c87b617/watcher-applier/0.log" Feb 19 11:13:26 crc kubenswrapper[4873]: I0219 11:13:26.996683 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9fb835f9-7ac4-4212-a372-b793c2fb8afd/watcher-api-log/0.log" Feb 19 11:13:29 crc kubenswrapper[4873]: I0219 11:13:29.785515 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_3ecf8671-28f5-4549-a4c1-0cdad8421837/watcher-decision-engine/0.log" Feb 19 11:13:30 crc kubenswrapper[4873]: I0219 11:13:30.370157 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9fb835f9-7ac4-4212-a372-b793c2fb8afd/watcher-api/0.log" Feb 19 11:13:38 crc kubenswrapper[4873]: I0219 11:13:38.484898 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:13:38 crc kubenswrapper[4873]: E0219 11:13:38.486705 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 
11:13:52 crc kubenswrapper[4873]: I0219 11:13:52.484133 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:13:52 crc kubenswrapper[4873]: I0219 11:13:52.790797 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543"} Feb 19 11:13:56 crc kubenswrapper[4873]: I0219 11:13:56.557346 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log" Feb 19 11:13:56 crc kubenswrapper[4873]: I0219 11:13:56.720025 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log" Feb 19 11:13:56 crc kubenswrapper[4873]: I0219 11:13:56.751830 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log" Feb 19 11:13:56 crc kubenswrapper[4873]: I0219 11:13:56.799458 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log" Feb 19 11:13:56 crc kubenswrapper[4873]: I0219 11:13:56.994591 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log" Feb 19 11:13:57 crc kubenswrapper[4873]: I0219 11:13:57.032841 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log" Feb 19 11:13:57 crc kubenswrapper[4873]: I0219 11:13:57.050161 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/extract/0.log" Feb 19 11:13:57 crc kubenswrapper[4873]: I0219 11:13:57.475668 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-t54x9_f108f6ea-4506-48bf-b948-e367078c3dce/manager/0.log" Feb 19 11:13:57 crc kubenswrapper[4873]: I0219 11:13:57.819333 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-vgxsl_43531003-74d3-43b9-b0f5-6fca42b21975/manager/0.log" Feb 19 11:13:58 crc kubenswrapper[4873]: I0219 11:13:58.419711 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-vwx5n_8d4b6c84-e5ed-4761-b7c7-95b21da856f7/manager/0.log" Feb 19 11:13:58 crc kubenswrapper[4873]: I0219 11:13:58.708397 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-r9b5b_2b1c8872-b310-4994-819c-a8e472d8e522/manager/0.log" Feb 19 11:13:59 crc kubenswrapper[4873]: I0219 11:13:59.223016 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-f86jr_aeccf47e-b953-4036-b271-be284b9ab385/manager/0.log" Feb 19 11:13:59 crc kubenswrapper[4873]: I0219 11:13:59.660269 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-4t46s_3ff0155f-08fd-42f5-9b31-c3b9a7cefefe/manager/0.log" Feb 19 11:13:59 crc kubenswrapper[4873]: I0219 11:13:59.752862 4873 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-cx7xf_2e7ca3f2-f73b-4bac-93bb-68b2518d956e/manager/0.log" Feb 19 11:14:00 crc kubenswrapper[4873]: I0219 11:14:00.489012 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-t2hfl_e4172fa9-b04e-4894-82d6-ec65ea92b004/manager/0.log" Feb 19 11:14:00 crc kubenswrapper[4873]: I0219 11:14:00.499267 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-t7mwr_ecf3484a-026e-4655-bfa8-e5292e2f62c5/manager/0.log" Feb 19 11:14:00 crc kubenswrapper[4873]: I0219 11:14:00.735585 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8v7q6_588098b3-662f-4f6f-914c-8cb28e055ccd/manager/0.log" Feb 19 11:14:00 crc kubenswrapper[4873]: I0219 11:14:00.905480 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-d6h72_c471d099-fa02-4463-9eb9-9d0f6a3832e6/manager/0.log" Feb 19 11:14:01 crc kubenswrapper[4873]: I0219 11:14:01.026678 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-n6djt_8eec8859-f388-4d81-bbce-0433a66a1ef7/manager/0.log" Feb 19 11:14:01 crc kubenswrapper[4873]: I0219 11:14:01.283867 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv_515c6c0c-ae00-4ae1-ab3f-e22e5a585681/manager/0.log" Feb 19 11:14:01 crc kubenswrapper[4873]: I0219 11:14:01.564001 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8476bb6847-rv4sx_e18b6851-e022-488e-bd95-27d1659f2761/operator/0.log" Feb 19 11:14:01 crc kubenswrapper[4873]: I0219 
11:14:01.746087 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p62rb_0144fe1c-ef13-4b4e-8cda-ddc72e2516bb/registry-server/0.log" Feb 19 11:14:02 crc kubenswrapper[4873]: I0219 11:14:02.052786 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-db4dr_dc53742c-7e71-49fa-9378-b26036c80275/manager/0.log" Feb 19 11:14:02 crc kubenswrapper[4873]: I0219 11:14:02.280613 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-6hpwv_74e9952e-50ef-4389-aa77-8f6e9cc790a8/manager/0.log" Feb 19 11:14:02 crc kubenswrapper[4873]: I0219 11:14:02.607997 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lcnz4_9574bff7-0aac-4a24-b69f-135ff968422e/operator/0.log" Feb 19 11:14:02 crc kubenswrapper[4873]: I0219 11:14:02.964524 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-r74rt_1f098ace-bbc4-46ee-8e72-ab65a59851eb/manager/0.log" Feb 19 11:14:03 crc kubenswrapper[4873]: I0219 11:14:03.557665 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-2szzj_e139553a-a68d-424d-95b5-9093ea05440b/manager/0.log" Feb 19 11:14:03 crc kubenswrapper[4873]: I0219 11:14:03.608857 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-g22tc_0e9da99c-56ee-4353-9378-c59a2c4e1608/manager/0.log" Feb 19 11:14:03 crc kubenswrapper[4873]: I0219 11:14:03.969043 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7d767c64df-hld6w_e827e28d-ffd8-4f59-82bf-a6db1dab5413/manager/0.log" Feb 19 11:14:04 crc kubenswrapper[4873]: I0219 
11:14:04.088590 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77c7c45f98-q8khx_26f0a6ea-18fb-411a-b193-83938a4bbe19/manager/0.log" Feb 19 11:14:04 crc kubenswrapper[4873]: I0219 11:14:04.248123 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-t9kgf_080befba-c501-4f84-8644-6b9fda0d8d5f/manager/0.log" Feb 19 11:14:09 crc kubenswrapper[4873]: I0219 11:14:09.876984 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-hqmvw_d53d2bae-fcdd-408c-9950-440e841cc035/manager/0.log" Feb 19 11:14:12 crc kubenswrapper[4873]: I0219 11:14:12.899651 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:12 crc kubenswrapper[4873]: E0219 11:14:12.900786 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b446129e-ed0f-4d69-b8e6-4080c69ec21b" containerName="container-00" Feb 19 11:14:12 crc kubenswrapper[4873]: I0219 11:14:12.900803 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b446129e-ed0f-4d69-b8e6-4080c69ec21b" containerName="container-00" Feb 19 11:14:12 crc kubenswrapper[4873]: I0219 11:14:12.901043 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b446129e-ed0f-4d69-b8e6-4080c69ec21b" containerName="container-00" Feb 19 11:14:12 crc kubenswrapper[4873]: I0219 11:14:12.909544 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:12 crc kubenswrapper[4873]: I0219 11:14:12.915361 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.061496 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbjg\" (UniqueName: \"kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.061689 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.061725 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.163927 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.164251 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.164517 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.164647 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.164839 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbjg\" (UniqueName: \"kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.186837 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzbjg\" (UniqueName: \"kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg\") pod \"redhat-marketplace-9rg97\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.235760 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:13 crc kubenswrapper[4873]: W0219 11:14:13.927842 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78dc4897_272b_47c0_a914_f67474646b69.slice/crio-bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d WatchSource:0}: Error finding container bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d: Status 404 returned error can't find the container with id bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d Feb 19 11:14:13 crc kubenswrapper[4873]: I0219 11:14:13.941843 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:14 crc kubenswrapper[4873]: I0219 11:14:14.021888 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerStarted","Data":"bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d"} Feb 19 11:14:15 crc kubenswrapper[4873]: I0219 11:14:15.033722 4873 generic.go:334] "Generic (PLEG): container finished" podID="78dc4897-272b-47c0-a914-f67474646b69" containerID="b30980f32b2645748e6c10b7a5e9191a7ed287e91ec99d6202bbad12353be541" exitCode=0 Feb 19 11:14:15 crc kubenswrapper[4873]: I0219 11:14:15.034026 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerDied","Data":"b30980f32b2645748e6c10b7a5e9191a7ed287e91ec99d6202bbad12353be541"} Feb 19 11:14:16 crc kubenswrapper[4873]: I0219 11:14:16.042997 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" 
event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerStarted","Data":"981c3b34f5f254eaaf4de449435114b331f5b9ed7452571e8eef21054d499734"} Feb 19 11:14:17 crc kubenswrapper[4873]: I0219 11:14:17.055309 4873 generic.go:334] "Generic (PLEG): container finished" podID="78dc4897-272b-47c0-a914-f67474646b69" containerID="981c3b34f5f254eaaf4de449435114b331f5b9ed7452571e8eef21054d499734" exitCode=0 Feb 19 11:14:17 crc kubenswrapper[4873]: I0219 11:14:17.055398 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerDied","Data":"981c3b34f5f254eaaf4de449435114b331f5b9ed7452571e8eef21054d499734"} Feb 19 11:14:18 crc kubenswrapper[4873]: I0219 11:14:18.066901 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerStarted","Data":"97f52c7b6a3d35760dce1e3eea42c096bc9ea72133b589b617ac83afe8756696"} Feb 19 11:14:18 crc kubenswrapper[4873]: I0219 11:14:18.110134 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9rg97" podStartSLOduration=3.511158011 podStartE2EDuration="6.110114632s" podCreationTimestamp="2026-02-19 11:14:12 +0000 UTC" firstStartedPulling="2026-02-19 11:14:15.036551567 +0000 UTC m=+5364.325983205" lastFinishedPulling="2026-02-19 11:14:17.635508188 +0000 UTC m=+5366.924939826" observedRunningTime="2026-02-19 11:14:18.095222038 +0000 UTC m=+5367.384653686" watchObservedRunningTime="2026-02-19 11:14:18.110114632 +0000 UTC m=+5367.399546270" Feb 19 11:14:23 crc kubenswrapper[4873]: I0219 11:14:23.236589 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:23 crc kubenswrapper[4873]: I0219 11:14:23.237317 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:23 crc kubenswrapper[4873]: I0219 11:14:23.285407 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:24 crc kubenswrapper[4873]: I0219 11:14:24.174692 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:24 crc kubenswrapper[4873]: I0219 11:14:24.233715 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:26 crc kubenswrapper[4873]: I0219 11:14:26.140302 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9rg97" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="registry-server" containerID="cri-o://97f52c7b6a3d35760dce1e3eea42c096bc9ea72133b589b617ac83afe8756696" gracePeriod=2 Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.150772 4873 generic.go:334] "Generic (PLEG): container finished" podID="78dc4897-272b-47c0-a914-f67474646b69" containerID="97f52c7b6a3d35760dce1e3eea42c096bc9ea72133b589b617ac83afe8756696" exitCode=0 Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.150817 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerDied","Data":"97f52c7b6a3d35760dce1e3eea42c096bc9ea72133b589b617ac83afe8756696"} Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.151377 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rg97" event={"ID":"78dc4897-272b-47c0-a914-f67474646b69","Type":"ContainerDied","Data":"bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d"} Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.151394 4873 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="bba085bf06b9aa2ef5767b63836e8944be40006b10c41e1a9e0f78dc8336e33d" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.237080 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.384072 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities\") pod \"78dc4897-272b-47c0-a914-f67474646b69\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.384145 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzbjg\" (UniqueName: \"kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg\") pod \"78dc4897-272b-47c0-a914-f67474646b69\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.384257 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content\") pod \"78dc4897-272b-47c0-a914-f67474646b69\" (UID: \"78dc4897-272b-47c0-a914-f67474646b69\") " Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.385145 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities" (OuterVolumeSpecName: "utilities") pod "78dc4897-272b-47c0-a914-f67474646b69" (UID: "78dc4897-272b-47c0-a914-f67474646b69"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.385806 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.398615 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg" (OuterVolumeSpecName: "kube-api-access-qzbjg") pod "78dc4897-272b-47c0-a914-f67474646b69" (UID: "78dc4897-272b-47c0-a914-f67474646b69"). InnerVolumeSpecName "kube-api-access-qzbjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.400671 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s67xb_d639ff25-343e-4e7c-bd2e-f5fc533923f4/control-plane-machine-set-operator/0.log" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.405835 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78dc4897-272b-47c0-a914-f67474646b69" (UID: "78dc4897-272b-47c0-a914-f67474646b69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.454703 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k627b_df659e7d-39ab-41ee-8df5-08896976666c/kube-rbac-proxy/0.log" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.487931 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzbjg\" (UniqueName: \"kubernetes.io/projected/78dc4897-272b-47c0-a914-f67474646b69-kube-api-access-qzbjg\") on node \"crc\" DevicePath \"\"" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.487963 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78dc4897-272b-47c0-a914-f67474646b69-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:14:27 crc kubenswrapper[4873]: I0219 11:14:27.581317 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k627b_df659e7d-39ab-41ee-8df5-08896976666c/machine-api-operator/0.log" Feb 19 11:14:28 crc kubenswrapper[4873]: I0219 11:14:28.167054 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rg97" Feb 19 11:14:28 crc kubenswrapper[4873]: I0219 11:14:28.195602 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:28 crc kubenswrapper[4873]: I0219 11:14:28.207852 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rg97"] Feb 19 11:14:29 crc kubenswrapper[4873]: I0219 11:14:29.500017 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78dc4897-272b-47c0-a914-f67474646b69" path="/var/lib/kubelet/pods/78dc4897-272b-47c0-a914-f67474646b69/volumes" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.341463 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:14:33 crc kubenswrapper[4873]: E0219 11:14:33.342255 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="registry-server" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.342266 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="registry-server" Feb 19 11:14:33 crc kubenswrapper[4873]: E0219 11:14:33.342286 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="extract-utilities" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.342292 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="extract-utilities" Feb 19 11:14:33 crc kubenswrapper[4873]: E0219 11:14:33.342313 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="extract-content" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.342346 4873 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="extract-content" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.342548 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="78dc4897-272b-47c0-a914-f67474646b69" containerName="registry-server" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.344217 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.404534 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.413966 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.414144 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqbp6\" (UniqueName: \"kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.414278 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.516495 4873 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.516619 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqbp6\" (UniqueName: \"kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.516671 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.517126 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.517231 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.543164 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqbp6\" (UniqueName: 
\"kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6\") pod \"redhat-operators-9vrv5\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:33 crc kubenswrapper[4873]: I0219 11:14:33.699913 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:34 crc kubenswrapper[4873]: I0219 11:14:34.207585 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:14:35 crc kubenswrapper[4873]: I0219 11:14:35.226344 4873 generic.go:334] "Generic (PLEG): container finished" podID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerID="d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248" exitCode=0 Feb 19 11:14:35 crc kubenswrapper[4873]: I0219 11:14:35.226403 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerDied","Data":"d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248"} Feb 19 11:14:35 crc kubenswrapper[4873]: I0219 11:14:35.226448 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerStarted","Data":"dea05a1dfe955bca77ebea3dc2661c6c66f8b364f16daacd38376cbfa5a9555b"} Feb 19 11:14:36 crc kubenswrapper[4873]: I0219 11:14:36.237834 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerStarted","Data":"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b"} Feb 19 11:14:41 crc kubenswrapper[4873]: I0219 11:14:41.303031 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-ckd42_51fc361b-11a5-480a-a5b9-0eb4b7670e83/cert-manager-controller/0.log" Feb 19 11:14:41 crc kubenswrapper[4873]: I0219 11:14:41.360766 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-zhqgv_084c90b4-3270-4f64-8c8c-1a96f05dc1fa/cert-manager-cainjector/0.log" Feb 19 11:14:41 crc kubenswrapper[4873]: I0219 11:14:41.447793 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-fhd9c_2eebe311-368b-45b4-9e74-7442221e3785/cert-manager-webhook/0.log" Feb 19 11:14:42 crc kubenswrapper[4873]: I0219 11:14:42.324871 4873 generic.go:334] "Generic (PLEG): container finished" podID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerID="119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b" exitCode=0 Feb 19 11:14:42 crc kubenswrapper[4873]: I0219 11:14:42.324990 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerDied","Data":"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b"} Feb 19 11:14:43 crc kubenswrapper[4873]: I0219 11:14:43.335635 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerStarted","Data":"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9"} Feb 19 11:14:43 crc kubenswrapper[4873]: I0219 11:14:43.700921 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:43 crc kubenswrapper[4873]: I0219 11:14:43.700988 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:14:44 crc kubenswrapper[4873]: I0219 11:14:44.782587 4873 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-9vrv5" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" probeResult="failure" output=< Feb 19 11:14:44 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:14:44 crc kubenswrapper[4873]: > Feb 19 11:14:54 crc kubenswrapper[4873]: I0219 11:14:54.749147 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vrv5" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" probeResult="failure" output=< Feb 19 11:14:54 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:14:54 crc kubenswrapper[4873]: > Feb 19 11:14:54 crc kubenswrapper[4873]: I0219 11:14:54.778193 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-9cr2m_9b3c6348-1c17-4774-9739-7a1dd3021d81/nmstate-console-plugin/0.log" Feb 19 11:14:54 crc kubenswrapper[4873]: I0219 11:14:54.944559 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-75txf_62408ce4-73ce-4726-91c1-96f645c39dee/nmstate-handler/0.log" Feb 19 11:14:55 crc kubenswrapper[4873]: I0219 11:14:55.007477 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8jgss_3b960434-ef37-45ae-aa50-8d719c8e2df5/kube-rbac-proxy/0.log" Feb 19 11:14:55 crc kubenswrapper[4873]: I0219 11:14:55.094330 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8jgss_3b960434-ef37-45ae-aa50-8d719c8e2df5/nmstate-metrics/0.log" Feb 19 11:14:55 crc kubenswrapper[4873]: I0219 11:14:55.154454 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-qlgxw_f7f28c8a-4571-485c-96a2-fc1c5856e3ea/nmstate-operator/0.log" Feb 19 11:14:55 crc kubenswrapper[4873]: I0219 11:14:55.294711 4873 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nfh8w_7af074a2-c1f7-4253-8efc-065748e0452b/nmstate-webhook/0.log" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.156162 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9vrv5" podStartSLOduration=19.626669921 podStartE2EDuration="27.156139717s" podCreationTimestamp="2026-02-19 11:14:33 +0000 UTC" firstStartedPulling="2026-02-19 11:14:35.228698426 +0000 UTC m=+5384.518130064" lastFinishedPulling="2026-02-19 11:14:42.758168222 +0000 UTC m=+5392.047599860" observedRunningTime="2026-02-19 11:14:43.363361582 +0000 UTC m=+5392.652793220" watchObservedRunningTime="2026-02-19 11:15:00.156139717 +0000 UTC m=+5409.445571355" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.164001 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm"] Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.165604 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.167874 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.169602 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.179875 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm"] Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.233382 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.233555 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7v7\" (UniqueName: \"kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.233673 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.335691 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s7v7\" (UniqueName: \"kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.336170 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.336379 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.337543 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.344780 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.357413 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s7v7\" (UniqueName: \"kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7\") pod \"collect-profiles-29524995-hsrvm\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.493671 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:00 crc kubenswrapper[4873]: I0219 11:15:00.983243 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm"] Feb 19 11:15:01 crc kubenswrapper[4873]: I0219 11:15:01.503446 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" event={"ID":"b012f5e4-512b-4887-87c4-3b1d54b23599","Type":"ContainerStarted","Data":"71733d284d7a661e5788ad2a99d92730f3b60d0cc2e3069cd9a68e27bc610261"} Feb 19 11:15:01 crc kubenswrapper[4873]: I0219 11:15:01.503491 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" event={"ID":"b012f5e4-512b-4887-87c4-3b1d54b23599","Type":"ContainerStarted","Data":"667382952d4ebc7834562e45505740a4d0b603b4c0f3b51e63777f81a57296a9"} Feb 19 11:15:01 crc kubenswrapper[4873]: I0219 11:15:01.539828 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" 
podStartSLOduration=1.539809025 podStartE2EDuration="1.539809025s" podCreationTimestamp="2026-02-19 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:15:01.533864346 +0000 UTC m=+5410.823295984" watchObservedRunningTime="2026-02-19 11:15:01.539809025 +0000 UTC m=+5410.829240663" Feb 19 11:15:02 crc kubenswrapper[4873]: I0219 11:15:02.502969 4873 generic.go:334] "Generic (PLEG): container finished" podID="b012f5e4-512b-4887-87c4-3b1d54b23599" containerID="71733d284d7a661e5788ad2a99d92730f3b60d0cc2e3069cd9a68e27bc610261" exitCode=0 Feb 19 11:15:02 crc kubenswrapper[4873]: I0219 11:15:02.503013 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" event={"ID":"b012f5e4-512b-4887-87c4-3b1d54b23599","Type":"ContainerDied","Data":"71733d284d7a661e5788ad2a99d92730f3b60d0cc2e3069cd9a68e27bc610261"} Feb 19 11:15:03 crc kubenswrapper[4873]: I0219 11:15:03.761159 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:15:03 crc kubenswrapper[4873]: I0219 11:15:03.833542 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:15:03 crc kubenswrapper[4873]: I0219 11:15:03.899680 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.019081 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume\") pod \"b012f5e4-512b-4887-87c4-3b1d54b23599\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.019279 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s7v7\" (UniqueName: \"kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7\") pod \"b012f5e4-512b-4887-87c4-3b1d54b23599\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.019313 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume\") pod \"b012f5e4-512b-4887-87c4-3b1d54b23599\" (UID: \"b012f5e4-512b-4887-87c4-3b1d54b23599\") " Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.019830 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume" (OuterVolumeSpecName: "config-volume") pod "b012f5e4-512b-4887-87c4-3b1d54b23599" (UID: "b012f5e4-512b-4887-87c4-3b1d54b23599"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.027319 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b012f5e4-512b-4887-87c4-3b1d54b23599" (UID: "b012f5e4-512b-4887-87c4-3b1d54b23599"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.027328 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7" (OuterVolumeSpecName: "kube-api-access-5s7v7") pod "b012f5e4-512b-4887-87c4-3b1d54b23599" (UID: "b012f5e4-512b-4887-87c4-3b1d54b23599"). InnerVolumeSpecName "kube-api-access-5s7v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.121698 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s7v7\" (UniqueName: \"kubernetes.io/projected/b012f5e4-512b-4887-87c4-3b1d54b23599-kube-api-access-5s7v7\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.121731 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b012f5e4-512b-4887-87c4-3b1d54b23599-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.121741 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b012f5e4-512b-4887-87c4-3b1d54b23599-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.522964 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" event={"ID":"b012f5e4-512b-4887-87c4-3b1d54b23599","Type":"ContainerDied","Data":"667382952d4ebc7834562e45505740a4d0b603b4c0f3b51e63777f81a57296a9"} Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.523016 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="667382952d4ebc7834562e45505740a4d0b603b4c0f3b51e63777f81a57296a9" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.523052 4873 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-hsrvm" Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.543717 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.599046 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k"] Feb 19 11:15:04 crc kubenswrapper[4873]: I0219 11:15:04.610176 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-pp89k"] Feb 19 11:15:05 crc kubenswrapper[4873]: I0219 11:15:05.505232 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e250d05-a293-4a3c-8658-99d1ae2dc894" path="/var/lib/kubelet/pods/9e250d05-a293-4a3c-8658-99d1ae2dc894/volumes" Feb 19 11:15:05 crc kubenswrapper[4873]: I0219 11:15:05.530570 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vrv5" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" containerID="cri-o://3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9" gracePeriod=2 Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.219406 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.264814 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities\") pod \"3aa970ec-fd76-4bab-a561-14756fefbdd1\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.265082 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqbp6\" (UniqueName: \"kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6\") pod \"3aa970ec-fd76-4bab-a561-14756fefbdd1\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.265149 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content\") pod \"3aa970ec-fd76-4bab-a561-14756fefbdd1\" (UID: \"3aa970ec-fd76-4bab-a561-14756fefbdd1\") " Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.265555 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities" (OuterVolumeSpecName: "utilities") pod "3aa970ec-fd76-4bab-a561-14756fefbdd1" (UID: "3aa970ec-fd76-4bab-a561-14756fefbdd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.284684 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6" (OuterVolumeSpecName: "kube-api-access-kqbp6") pod "3aa970ec-fd76-4bab-a561-14756fefbdd1" (UID: "3aa970ec-fd76-4bab-a561-14756fefbdd1"). InnerVolumeSpecName "kube-api-access-kqbp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.367812 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqbp6\" (UniqueName: \"kubernetes.io/projected/3aa970ec-fd76-4bab-a561-14756fefbdd1-kube-api-access-kqbp6\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.367850 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.392254 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aa970ec-fd76-4bab-a561-14756fefbdd1" (UID: "3aa970ec-fd76-4bab-a561-14756fefbdd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.469767 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa970ec-fd76-4bab-a561-14756fefbdd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.540397 4873 generic.go:334] "Generic (PLEG): container finished" podID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerID="3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9" exitCode=0 Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.540437 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerDied","Data":"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9"} Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.540462 4873 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9vrv5" event={"ID":"3aa970ec-fd76-4bab-a561-14756fefbdd1","Type":"ContainerDied","Data":"dea05a1dfe955bca77ebea3dc2661c6c66f8b364f16daacd38376cbfa5a9555b"} Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.540481 4873 scope.go:117] "RemoveContainer" containerID="3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.540490 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vrv5" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.561209 4873 scope.go:117] "RemoveContainer" containerID="119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.585174 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.594029 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vrv5"] Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.606039 4873 scope.go:117] "RemoveContainer" containerID="d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.658818 4873 scope.go:117] "RemoveContainer" containerID="3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9" Feb 19 11:15:06 crc kubenswrapper[4873]: E0219 11:15:06.659767 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9\": container with ID starting with 3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9 not found: ID does not exist" containerID="3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.659805 4873 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9"} err="failed to get container status \"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9\": rpc error: code = NotFound desc = could not find container \"3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9\": container with ID starting with 3876019e05c868470eef8302abe3da2e5159d53aa9c32b37cb60972c4e6520b9 not found: ID does not exist" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.659857 4873 scope.go:117] "RemoveContainer" containerID="119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b" Feb 19 11:15:06 crc kubenswrapper[4873]: E0219 11:15:06.660215 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b\": container with ID starting with 119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b not found: ID does not exist" containerID="119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.660241 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b"} err="failed to get container status \"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b\": rpc error: code = NotFound desc = could not find container \"119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b\": container with ID starting with 119594e8ef0e26304e1384501bf3399b006a307679a417dd104d0e34ceace31b not found: ID does not exist" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.660259 4873 scope.go:117] "RemoveContainer" containerID="d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248" Feb 19 11:15:06 crc kubenswrapper[4873]: E0219 
11:15:06.660516 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248\": container with ID starting with d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248 not found: ID does not exist" containerID="d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248" Feb 19 11:15:06 crc kubenswrapper[4873]: I0219 11:15:06.660545 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248"} err="failed to get container status \"d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248\": rpc error: code = NotFound desc = could not find container \"d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248\": container with ID starting with d87b5266055a760854f04cd5355a22ae3b05d93504a1ddb696329e751f4db248 not found: ID does not exist" Feb 19 11:15:07 crc kubenswrapper[4873]: I0219 11:15:07.500067 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" path="/var/lib/kubelet/pods/3aa970ec-fd76-4bab-a561-14756fefbdd1/volumes" Feb 19 11:15:08 crc kubenswrapper[4873]: I0219 11:15:08.858870 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7nww_5d79d4d8-e595-4aec-bc0b-7347b826c257/prometheus-operator/0.log" Feb 19 11:15:09 crc kubenswrapper[4873]: I0219 11:15:09.056379 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7_3180318c-7d9a-454b-8de4-887fabae362b/prometheus-operator-admission-webhook/0.log" Feb 19 11:15:09 crc kubenswrapper[4873]: I0219 11:15:09.084877 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-qptdb_4724c979-0040-4017-86ce-78d2a8bdb44e/prometheus-operator-admission-webhook/0.log" Feb 19 11:15:09 crc kubenswrapper[4873]: I0219 11:15:09.255080 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7wtlv_b23281d2-935e-47c1-bc83-8d00c7649625/operator/0.log" Feb 19 11:15:09 crc kubenswrapper[4873]: I0219 11:15:09.302234 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8sflg_ea1cc2c7-c932-4b3d-b718-d017eb06163f/perses-operator/0.log" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.827474 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:21 crc kubenswrapper[4873]: E0219 11:15:21.828582 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="extract-content" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828598 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="extract-content" Feb 19 11:15:21 crc kubenswrapper[4873]: E0219 11:15:21.828634 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="extract-utilities" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828640 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="extract-utilities" Feb 19 11:15:21 crc kubenswrapper[4873]: E0219 11:15:21.828651 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b012f5e4-512b-4887-87c4-3b1d54b23599" containerName="collect-profiles" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828657 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="b012f5e4-512b-4887-87c4-3b1d54b23599" 
containerName="collect-profiles" Feb 19 11:15:21 crc kubenswrapper[4873]: E0219 11:15:21.828670 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828676 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828878 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa970ec-fd76-4bab-a561-14756fefbdd1" containerName="registry-server" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.828894 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="b012f5e4-512b-4887-87c4-3b1d54b23599" containerName="collect-profiles" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.830352 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.836552 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.892377 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.892421 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " 
pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.892538 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwfms\" (UniqueName: \"kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.994836 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwfms\" (UniqueName: \"kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.994981 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.995001 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.995502 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content\") pod \"community-operators-w429f\" (UID: 
\"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:21 crc kubenswrapper[4873]: I0219 11:15:21.995561 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.028074 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwfms\" (UniqueName: \"kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms\") pod \"community-operators-w429f\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.198667 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.487509 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7t964_4a42b4a3-c207-40a8-80b9-0532a0ec2865/controller/0.log" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.526376 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7t964_4a42b4a3-c207-40a8-80b9-0532a0ec2865/kube-rbac-proxy/0.log" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.722976 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.789665 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.983867 4873 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:15:22 crc kubenswrapper[4873]: I0219 11:15:22.992910 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.007200 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.035885 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.229560 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.229597 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.264009 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.300788 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.483236 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.484934 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.505216 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.542064 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/controller/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.664355 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/frr-metrics/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.697746 4873 generic.go:334] "Generic (PLEG): container finished" podID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerID="d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b" exitCode=0 Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.697795 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerDied","Data":"d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b"} Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.697825 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerStarted","Data":"53cf5f5bcf7ba2b2a7e15e41ecbaff43df691ee310c04d83ecb5ae5c7ae80c8f"} Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.747477 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/kube-rbac-proxy-frr/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.780331 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/kube-rbac-proxy/0.log" Feb 19 11:15:23 crc kubenswrapper[4873]: I0219 11:15:23.980110 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/reloader/0.log" Feb 19 11:15:24 crc kubenswrapper[4873]: I0219 11:15:24.068264 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-xwr52_8d8f9aee-601f-4530-876b-83709311196b/frr-k8s-webhook-server/0.log" Feb 19 11:15:24 crc kubenswrapper[4873]: I0219 11:15:24.243999 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6897955989-f6tl8_94f344cf-0f09-4812-ab40-dcce7f260a53/manager/0.log" Feb 19 11:15:24 crc kubenswrapper[4873]: I0219 11:15:24.557798 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bf7457c95-rq2ph_e9d29e18-f362-478f-911d-ed979e43aae1/webhook-server/0.log" Feb 19 11:15:24 crc kubenswrapper[4873]: I0219 11:15:24.666280 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-phsr6_46cac2a1-6c87-4c4e-a73f-92dbee290015/kube-rbac-proxy/0.log" Feb 19 11:15:24 crc kubenswrapper[4873]: I0219 11:15:24.710769 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerStarted","Data":"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219"} Feb 19 11:15:25 crc kubenswrapper[4873]: I0219 11:15:25.482758 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-phsr6_46cac2a1-6c87-4c4e-a73f-92dbee290015/speaker/0.log" Feb 19 11:15:25 crc kubenswrapper[4873]: I0219 11:15:25.534202 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/frr/0.log" Feb 19 11:15:26 crc kubenswrapper[4873]: I0219 11:15:26.727515 4873 generic.go:334] "Generic (PLEG): container finished" podID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerID="47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219" exitCode=0 Feb 19 11:15:26 crc kubenswrapper[4873]: I0219 11:15:26.727591 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerDied","Data":"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219"} Feb 19 11:15:27 crc kubenswrapper[4873]: I0219 11:15:27.740995 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerStarted","Data":"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15"} Feb 19 11:15:27 crc kubenswrapper[4873]: I0219 11:15:27.763596 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w429f" podStartSLOduration=3.255409032 podStartE2EDuration="6.763580988s" podCreationTimestamp="2026-02-19 11:15:21 +0000 UTC" firstStartedPulling="2026-02-19 11:15:23.701797019 +0000 UTC m=+5432.991228677" lastFinishedPulling="2026-02-19 11:15:27.209968995 +0000 UTC m=+5436.499400633" observedRunningTime="2026-02-19 11:15:27.761318501 +0000 UTC m=+5437.050750139" watchObservedRunningTime="2026-02-19 11:15:27.763580988 +0000 UTC m=+5437.053012626" Feb 19 11:15:32 crc kubenswrapper[4873]: I0219 11:15:32.199309 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:32 crc kubenswrapper[4873]: I0219 11:15:32.199962 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:32 crc kubenswrapper[4873]: I0219 11:15:32.249015 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:32 crc kubenswrapper[4873]: I0219 11:15:32.845257 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:32 crc kubenswrapper[4873]: I0219 11:15:32.897682 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:34 crc kubenswrapper[4873]: I0219 11:15:34.801461 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w429f" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="registry-server" containerID="cri-o://f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15" gracePeriod=2 Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.299408 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.468169 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content\") pod \"f5b5347f-ec96-4e47-a667-286f7e382b01\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.468258 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwfms\" (UniqueName: \"kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms\") pod \"f5b5347f-ec96-4e47-a667-286f7e382b01\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.468541 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities\") pod \"f5b5347f-ec96-4e47-a667-286f7e382b01\" (UID: \"f5b5347f-ec96-4e47-a667-286f7e382b01\") " Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.469692 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities" (OuterVolumeSpecName: "utilities") pod "f5b5347f-ec96-4e47-a667-286f7e382b01" (UID: "f5b5347f-ec96-4e47-a667-286f7e382b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.474308 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms" (OuterVolumeSpecName: "kube-api-access-xwfms") pod "f5b5347f-ec96-4e47-a667-286f7e382b01" (UID: "f5b5347f-ec96-4e47-a667-286f7e382b01"). InnerVolumeSpecName "kube-api-access-xwfms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.572502 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.572554 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwfms\" (UniqueName: \"kubernetes.io/projected/f5b5347f-ec96-4e47-a667-286f7e382b01-kube-api-access-xwfms\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.813155 4873 generic.go:334] "Generic (PLEG): container finished" podID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerID="f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15" exitCode=0 Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.813240 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w429f" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.813262 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerDied","Data":"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15"} Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.813650 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w429f" event={"ID":"f5b5347f-ec96-4e47-a667-286f7e382b01","Type":"ContainerDied","Data":"53cf5f5bcf7ba2b2a7e15e41ecbaff43df691ee310c04d83ecb5ae5c7ae80c8f"} Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.813675 4873 scope.go:117] "RemoveContainer" containerID="f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.843797 4873 scope.go:117] "RemoveContainer" 
containerID="47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.869474 4873 scope.go:117] "RemoveContainer" containerID="d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.931764 4873 scope.go:117] "RemoveContainer" containerID="f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15" Feb 19 11:15:35 crc kubenswrapper[4873]: E0219 11:15:35.932196 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15\": container with ID starting with f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15 not found: ID does not exist" containerID="f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.932232 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15"} err="failed to get container status \"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15\": rpc error: code = NotFound desc = could not find container \"f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15\": container with ID starting with f8e747bd988653ca7f19917ec2b99fc1db607d63f2ebbd9eed9a7cbaa27a3c15 not found: ID does not exist" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.932258 4873 scope.go:117] "RemoveContainer" containerID="47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219" Feb 19 11:15:35 crc kubenswrapper[4873]: E0219 11:15:35.932511 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219\": container with ID starting with 
47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219 not found: ID does not exist" containerID="47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.932538 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219"} err="failed to get container status \"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219\": rpc error: code = NotFound desc = could not find container \"47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219\": container with ID starting with 47f4cc4a1e4fd76aa97fcb23bb085ce2414f8284a04c68faa6b249add2e91219 not found: ID does not exist" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.932584 4873 scope.go:117] "RemoveContainer" containerID="d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b" Feb 19 11:15:35 crc kubenswrapper[4873]: E0219 11:15:35.932828 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b\": container with ID starting with d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b not found: ID does not exist" containerID="d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b" Feb 19 11:15:35 crc kubenswrapper[4873]: I0219 11:15:35.932855 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b"} err="failed to get container status \"d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b\": rpc error: code = NotFound desc = could not find container \"d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b\": container with ID starting with d76c5be7fa7447fdf794eef4c34831d7fd615bae29cb424d67a2de54bab7268b not found: ID does not 
exist" Feb 19 11:15:36 crc kubenswrapper[4873]: I0219 11:15:36.189322 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5b5347f-ec96-4e47-a667-286f7e382b01" (UID: "f5b5347f-ec96-4e47-a667-286f7e382b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:15:36 crc kubenswrapper[4873]: I0219 11:15:36.285520 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b5347f-ec96-4e47-a667-286f7e382b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:36 crc kubenswrapper[4873]: I0219 11:15:36.456165 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:36 crc kubenswrapper[4873]: I0219 11:15:36.469499 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w429f"] Feb 19 11:15:37 crc kubenswrapper[4873]: I0219 11:15:37.495316 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" path="/var/lib/kubelet/pods/f5b5347f-ec96-4e47-a667-286f7e382b01/volumes" Feb 19 11:15:37 crc kubenswrapper[4873]: I0219 11:15:37.889942 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.178858 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.180927 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.210652 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.365023 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.383526 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/extract/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.450419 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.569092 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.724172 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.735933 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 
11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.742228 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.928749 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/extract/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.950487 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 11:15:38 crc kubenswrapper[4873]: I0219 11:15:38.971179 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.100826 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.264627 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.300674 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.314609 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 
11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.467646 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.521354 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.751680 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.937655 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:15:39 crc kubenswrapper[4873]: I0219 11:15:39.954281 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/registry-server/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.028496 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.031307 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.179696 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.185424 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.447188 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.611068 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.690628 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.701840 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:15:40 crc kubenswrapper[4873]: I0219 11:15:40.919957 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.029843 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/extract/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.050965 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:15:41 crc 
kubenswrapper[4873]: I0219 11:15:41.078667 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/registry-server/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.238064 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jt9rj_1d58439b-31c6-44df-a32d-48f0fcb6a361/marketplace-operator/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.285236 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.466158 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.480360 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.500517 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.674380 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.699573 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.864586 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/registry-server/0.log" Feb 19 11:15:41 crc kubenswrapper[4873]: I0219 11:15:41.921299 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.064075 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.067475 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.116413 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.240302 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.262265 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:15:42 crc kubenswrapper[4873]: I0219 11:15:42.915810 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/registry-server/0.log" Feb 19 11:15:55 crc kubenswrapper[4873]: I0219 11:15:55.869503 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7_3180318c-7d9a-454b-8de4-887fabae362b/prometheus-operator-admission-webhook/0.log" Feb 19 11:15:55 crc kubenswrapper[4873]: I0219 11:15:55.870410 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-qptdb_4724c979-0040-4017-86ce-78d2a8bdb44e/prometheus-operator-admission-webhook/0.log" Feb 19 11:15:55 crc kubenswrapper[4873]: I0219 11:15:55.903790 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7nww_5d79d4d8-e595-4aec-bc0b-7347b826c257/prometheus-operator/0.log" Feb 19 11:15:56 crc kubenswrapper[4873]: I0219 11:15:56.096227 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7wtlv_b23281d2-935e-47c1-bc83-8d00c7649625/operator/0.log" Feb 19 11:15:56 crc kubenswrapper[4873]: I0219 11:15:56.107305 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8sflg_ea1cc2c7-c932-4b3d-b718-d017eb06163f/perses-operator/0.log" Feb 19 11:16:03 crc kubenswrapper[4873]: I0219 11:16:03.701412 4873 scope.go:117] "RemoveContainer" containerID="97237f992c0b70ff79ac5f913c59bb566d1f47e053b9c438613bd77bb3e8a5fe" Feb 19 11:16:18 crc kubenswrapper[4873]: I0219 11:16:18.240944 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:16:18 crc kubenswrapper[4873]: I0219 11:16:18.242313 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:16:48 crc kubenswrapper[4873]: I0219 11:16:48.240927 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:16:48 crc kubenswrapper[4873]: I0219 11:16:48.241542 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:17:18 crc kubenswrapper[4873]: I0219 11:17:18.240782 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:17:18 crc kubenswrapper[4873]: I0219 11:17:18.241368 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:17:18 crc kubenswrapper[4873]: I0219 11:17:18.241420 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:17:18 crc kubenswrapper[4873]: I0219 11:17:18.242242 4873 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:17:18 crc kubenswrapper[4873]: I0219 11:17:18.242293 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543" gracePeriod=600 Feb 19 11:17:19 crc kubenswrapper[4873]: I0219 11:17:19.058297 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543" exitCode=0 Feb 19 11:17:19 crc kubenswrapper[4873]: I0219 11:17:19.058418 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543"} Feb 19 11:17:19 crc kubenswrapper[4873]: I0219 11:17:19.058994 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"} Feb 19 11:17:19 crc kubenswrapper[4873]: I0219 11:17:19.059044 4873 scope.go:117] "RemoveContainer" containerID="1c6d3e122fcbfb2819864013ec95d084efa34729bd03b5b61b19e8b2dae70bd7" Feb 19 11:18:03 crc kubenswrapper[4873]: I0219 11:18:03.836555 4873 scope.go:117] "RemoveContainer" containerID="e1e47bb1ff8a672d572432525165620c6fe8cdbb5878839e48d767115053def9" Feb 19 
11:18:07 crc kubenswrapper[4873]: I0219 11:18:07.563529 4873 generic.go:334] "Generic (PLEG): container finished" podID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerID="ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7" exitCode=0 Feb 19 11:18:07 crc kubenswrapper[4873]: I0219 11:18:07.563601 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lms9s/must-gather-lgwst" event={"ID":"a6f4f1cb-6b34-4940-be18-6ba992fd72d7","Type":"ContainerDied","Data":"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7"} Feb 19 11:18:07 crc kubenswrapper[4873]: I0219 11:18:07.564550 4873 scope.go:117] "RemoveContainer" containerID="ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7" Feb 19 11:18:07 crc kubenswrapper[4873]: I0219 11:18:07.921502 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lms9s_must-gather-lgwst_a6f4f1cb-6b34-4940-be18-6ba992fd72d7/gather/0.log" Feb 19 11:18:10 crc kubenswrapper[4873]: E0219 11:18:10.541147 4873 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.156:45014->38.102.83.156:45689: write tcp 38.102.83.156:45014->38.102.83.156:45689: write: broken pipe Feb 19 11:18:16 crc kubenswrapper[4873]: I0219 11:18:16.770156 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lms9s/must-gather-lgwst"] Feb 19 11:18:16 crc kubenswrapper[4873]: I0219 11:18:16.771043 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lms9s/must-gather-lgwst" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="copy" containerID="cri-o://d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620" gracePeriod=2 Feb 19 11:18:16 crc kubenswrapper[4873]: I0219 11:18:16.780733 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lms9s/must-gather-lgwst"] Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.225589 4873 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lms9s_must-gather-lgwst_a6f4f1cb-6b34-4940-be18-6ba992fd72d7/copy/0.log" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.226324 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lms9s/must-gather-lgwst" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.368888 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output\") pod \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.368979 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9b49\" (UniqueName: \"kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49\") pod \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\" (UID: \"a6f4f1cb-6b34-4940-be18-6ba992fd72d7\") " Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.375644 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49" (OuterVolumeSpecName: "kube-api-access-n9b49") pod "a6f4f1cb-6b34-4940-be18-6ba992fd72d7" (UID: "a6f4f1cb-6b34-4940-be18-6ba992fd72d7"). InnerVolumeSpecName "kube-api-access-n9b49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.472705 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9b49\" (UniqueName: \"kubernetes.io/projected/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-kube-api-access-n9b49\") on node \"crc\" DevicePath \"\"" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.581764 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a6f4f1cb-6b34-4940-be18-6ba992fd72d7" (UID: "a6f4f1cb-6b34-4940-be18-6ba992fd72d7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.676720 4873 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6f4f1cb-6b34-4940-be18-6ba992fd72d7-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.683056 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lms9s_must-gather-lgwst_a6f4f1cb-6b34-4940-be18-6ba992fd72d7/copy/0.log" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.684561 4873 generic.go:334] "Generic (PLEG): container finished" podID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerID="d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620" exitCode=143 Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.684623 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lms9s/must-gather-lgwst" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.684647 4873 scope.go:117] "RemoveContainer" containerID="d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.725801 4873 scope.go:117] "RemoveContainer" containerID="ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.768641 4873 scope.go:117] "RemoveContainer" containerID="d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620" Feb 19 11:18:17 crc kubenswrapper[4873]: E0219 11:18:17.770811 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620\": container with ID starting with d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620 not found: ID does not exist" containerID="d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.770877 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620"} err="failed to get container status \"d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620\": rpc error: code = NotFound desc = could not find container \"d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620\": container with ID starting with d21919709c4c5fc4cd8c1920eab8a780a95e4fd90dfbc7716f574f51697d2620 not found: ID does not exist" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.770912 4873 scope.go:117] "RemoveContainer" containerID="ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7" Feb 19 11:18:17 crc kubenswrapper[4873]: E0219 11:18:17.773561 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7\": container with ID starting with ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7 not found: ID does not exist" containerID="ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7" Feb 19 11:18:17 crc kubenswrapper[4873]: I0219 11:18:17.773652 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7"} err="failed to get container status \"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7\": rpc error: code = NotFound desc = could not find container \"ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7\": container with ID starting with ec0556d441d0438d8f010526f2d7e9ab2d6477bf9ac4834a65568abbbc6e94c7 not found: ID does not exist" Feb 19 11:18:19 crc kubenswrapper[4873]: I0219 11:18:19.496450 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" path="/var/lib/kubelet/pods/a6f4f1cb-6b34-4940-be18-6ba992fd72d7/volumes" Feb 19 11:19:03 crc kubenswrapper[4873]: I0219 11:19:03.906414 4873 scope.go:117] "RemoveContainer" containerID="9881fb46a0d2568f8dddfc8da179e6f97035508c7f4131e1a48dad7a6c2fd139" Feb 19 11:19:18 crc kubenswrapper[4873]: I0219 11:19:18.240518 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:19:18 crc kubenswrapper[4873]: I0219 11:19:18.241059 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:19:48 crc kubenswrapper[4873]: I0219 11:19:48.241068 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:19:48 crc kubenswrapper[4873]: I0219 11:19:48.241732 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.240935 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.241530 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.241578 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.242337 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.242388 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" gracePeriod=600 Feb 19 11:20:18 crc kubenswrapper[4873]: E0219 11:20:18.366260 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.927202 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" exitCode=0 Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.927287 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"} Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.927591 4873 scope.go:117] "RemoveContainer" containerID="b1849d703253da95bfe5a3436c40938b54212f4d26fc335188390020db850543" Feb 19 11:20:18 crc kubenswrapper[4873]: I0219 11:20:18.928398 4873 
scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:20:18 crc kubenswrapper[4873]: E0219 11:20:18.928727 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.720644 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:21 crc kubenswrapper[4873]: E0219 11:20:21.721983 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="gather" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722007 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="gather" Feb 19 11:20:21 crc kubenswrapper[4873]: E0219 11:20:21.722034 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="extract-utilities" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722046 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="extract-utilities" Feb 19 11:20:21 crc kubenswrapper[4873]: E0219 11:20:21.722075 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="registry-server" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722087 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="registry-server" Feb 19 11:20:21 crc kubenswrapper[4873]: E0219 
11:20:21.722137 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="copy" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722148 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="copy" Feb 19 11:20:21 crc kubenswrapper[4873]: E0219 11:20:21.722191 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="extract-content" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722203 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="extract-content" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722650 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="gather" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722673 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f4f1cb-6b34-4940-be18-6ba992fd72d7" containerName="copy" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.722689 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b5347f-ec96-4e47-a667-286f7e382b01" containerName="registry-server" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.725156 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.731339 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.864118 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nqw\" (UniqueName: \"kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.864162 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.864496 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.966996 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.967153 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9nqw\" (UniqueName: \"kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.967173 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.967609 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:21 crc kubenswrapper[4873]: I0219 11:20:21.967819 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.014410 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nqw\" (UniqueName: \"kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw\") pod \"certified-operators-2mprj\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.058741 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.617003 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.975473 4873 generic.go:334] "Generic (PLEG): container finished" podID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerID="88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84" exitCode=0 Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.975520 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerDied","Data":"88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84"} Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.975563 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerStarted","Data":"037a9073984a8a4475faa5e356612016fb33bcecce2615925d804f9f77b81660"} Feb 19 11:20:22 crc kubenswrapper[4873]: I0219 11:20:22.979494 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:20:24 crc kubenswrapper[4873]: I0219 11:20:24.995677 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerStarted","Data":"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b"} Feb 19 11:20:26 crc kubenswrapper[4873]: I0219 11:20:26.007490 4873 generic.go:334] "Generic (PLEG): container finished" podID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerID="0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b" exitCode=0 Feb 19 11:20:26 crc kubenswrapper[4873]: I0219 11:20:26.007597 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerDied","Data":"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b"} Feb 19 11:20:27 crc kubenswrapper[4873]: I0219 11:20:27.036421 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerStarted","Data":"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51"} Feb 19 11:20:27 crc kubenswrapper[4873]: I0219 11:20:27.077749 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2mprj" podStartSLOduration=2.6368362579999998 podStartE2EDuration="6.077722971s" podCreationTimestamp="2026-02-19 11:20:21 +0000 UTC" firstStartedPulling="2026-02-19 11:20:22.979185138 +0000 UTC m=+5732.268616786" lastFinishedPulling="2026-02-19 11:20:26.420071861 +0000 UTC m=+5735.709503499" observedRunningTime="2026-02-19 11:20:27.067503054 +0000 UTC m=+5736.356934692" watchObservedRunningTime="2026-02-19 11:20:27.077722971 +0000 UTC m=+5736.367154609" Feb 19 11:20:31 crc kubenswrapper[4873]: I0219 11:20:31.492961 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:20:31 crc kubenswrapper[4873]: E0219 11:20:31.494021 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:20:32 crc kubenswrapper[4873]: I0219 11:20:32.059322 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:32 crc kubenswrapper[4873]: I0219 11:20:32.059369 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:32 crc kubenswrapper[4873]: I0219 11:20:32.140513 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:32 crc kubenswrapper[4873]: I0219 11:20:32.226381 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:32 crc kubenswrapper[4873]: I0219 11:20:32.408891 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.105549 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2mprj" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="registry-server" containerID="cri-o://4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51" gracePeriod=2 Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.569169 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.653663 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content\") pod \"a751696c-f9c4-4ab3-aba8-95342fed53a4\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.653870 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities\") pod \"a751696c-f9c4-4ab3-aba8-95342fed53a4\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.653900 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9nqw\" (UniqueName: \"kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw\") pod \"a751696c-f9c4-4ab3-aba8-95342fed53a4\" (UID: \"a751696c-f9c4-4ab3-aba8-95342fed53a4\") " Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.657888 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities" (OuterVolumeSpecName: "utilities") pod "a751696c-f9c4-4ab3-aba8-95342fed53a4" (UID: "a751696c-f9c4-4ab3-aba8-95342fed53a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.661095 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw" (OuterVolumeSpecName: "kube-api-access-j9nqw") pod "a751696c-f9c4-4ab3-aba8-95342fed53a4" (UID: "a751696c-f9c4-4ab3-aba8-95342fed53a4"). InnerVolumeSpecName "kube-api-access-j9nqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.757020 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.757050 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9nqw\" (UniqueName: \"kubernetes.io/projected/a751696c-f9c4-4ab3-aba8-95342fed53a4-kube-api-access-j9nqw\") on node \"crc\" DevicePath \"\"" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.824008 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a751696c-f9c4-4ab3-aba8-95342fed53a4" (UID: "a751696c-f9c4-4ab3-aba8-95342fed53a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:20:34 crc kubenswrapper[4873]: I0219 11:20:34.858707 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a751696c-f9c4-4ab3-aba8-95342fed53a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.115508 4873 generic.go:334] "Generic (PLEG): container finished" podID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerID="4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51" exitCode=0 Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.115566 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerDied","Data":"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51"} Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.115598 4873 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2mprj" event={"ID":"a751696c-f9c4-4ab3-aba8-95342fed53a4","Type":"ContainerDied","Data":"037a9073984a8a4475faa5e356612016fb33bcecce2615925d804f9f77b81660"} Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.115616 4873 scope.go:117] "RemoveContainer" containerID="4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.115795 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mprj" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.142577 4873 scope.go:117] "RemoveContainer" containerID="0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.153463 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.162216 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2mprj"] Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.165689 4873 scope.go:117] "RemoveContainer" containerID="88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.214087 4873 scope.go:117] "RemoveContainer" containerID="4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51" Feb 19 11:20:35 crc kubenswrapper[4873]: E0219 11:20:35.214541 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51\": container with ID starting with 4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51 not found: ID does not exist" containerID="4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 
11:20:35.214598 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51"} err="failed to get container status \"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51\": rpc error: code = NotFound desc = could not find container \"4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51\": container with ID starting with 4025b1756223afbc418f2a5c83f4a05f80b7c4b8565270105b193a686fe90f51 not found: ID does not exist" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.214643 4873 scope.go:117] "RemoveContainer" containerID="0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b" Feb 19 11:20:35 crc kubenswrapper[4873]: E0219 11:20:35.214992 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b\": container with ID starting with 0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b not found: ID does not exist" containerID="0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.215032 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b"} err="failed to get container status \"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b\": rpc error: code = NotFound desc = could not find container \"0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b\": container with ID starting with 0373cd844ad41c1f4e277a296c09e9b2f4f251b01a9feaa92bc83413e087803b not found: ID does not exist" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.215057 4873 scope.go:117] "RemoveContainer" containerID="88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84" Feb 19 11:20:35 crc 
kubenswrapper[4873]: E0219 11:20:35.215373 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84\": container with ID starting with 88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84 not found: ID does not exist" containerID="88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.215408 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84"} err="failed to get container status \"88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84\": rpc error: code = NotFound desc = could not find container \"88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84\": container with ID starting with 88112815c5110d49bf37ce2a6f63b237b5591a8d87cfd8400a25ea8171753c84 not found: ID does not exist" Feb 19 11:20:35 crc kubenswrapper[4873]: I0219 11:20:35.501980 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" path="/var/lib/kubelet/pods/a751696c-f9c4-4ab3-aba8-95342fed53a4/volumes" Feb 19 11:20:44 crc kubenswrapper[4873]: I0219 11:20:44.484403 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:20:44 crc kubenswrapper[4873]: E0219 11:20:44.485070 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:20:57 crc 
kubenswrapper[4873]: I0219 11:20:57.484065 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:20:57 crc kubenswrapper[4873]: E0219 11:20:57.485010 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:21:04 crc kubenswrapper[4873]: I0219 11:21:04.016840 4873 scope.go:117] "RemoveContainer" containerID="b30980f32b2645748e6c10b7a5e9191a7ed287e91ec99d6202bbad12353be541" Feb 19 11:21:04 crc kubenswrapper[4873]: I0219 11:21:04.048277 4873 scope.go:117] "RemoveContainer" containerID="97f52c7b6a3d35760dce1e3eea42c096bc9ea72133b589b617ac83afe8756696" Feb 19 11:21:04 crc kubenswrapper[4873]: I0219 11:21:04.092785 4873 scope.go:117] "RemoveContainer" containerID="981c3b34f5f254eaaf4de449435114b331f5b9ed7452571e8eef21054d499734" Feb 19 11:21:09 crc kubenswrapper[4873]: I0219 11:21:09.485250 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:21:09 crc kubenswrapper[4873]: E0219 11:21:09.485972 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:21:23 crc kubenswrapper[4873]: I0219 11:21:23.484439 4873 scope.go:117] "RemoveContainer" 
containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:21:23 crc kubenswrapper[4873]: E0219 11:21:23.485276 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:21:34 crc kubenswrapper[4873]: I0219 11:21:34.484430 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:21:34 crc kubenswrapper[4873]: E0219 11:21:34.486468 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:21:48 crc kubenswrapper[4873]: I0219 11:21:48.483989 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:21:48 crc kubenswrapper[4873]: E0219 11:21:48.484843 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.141449 4873 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhpxg/must-gather-vhpx8"] Feb 19 11:21:57 crc kubenswrapper[4873]: E0219 11:21:57.143686 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="registry-server" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.143830 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="registry-server" Feb 19 11:21:57 crc kubenswrapper[4873]: E0219 11:21:57.143931 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="extract-content" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.144013 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="extract-content" Feb 19 11:21:57 crc kubenswrapper[4873]: E0219 11:21:57.144116 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="extract-utilities" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.144199 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="extract-utilities" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.144547 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="a751696c-f9c4-4ab3-aba8-95342fed53a4" containerName="registry-server" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.146894 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.159863 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhpxg"/"openshift-service-ca.crt" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.160205 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhpxg"/"kube-root-ca.crt" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.171421 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhpxg/must-gather-vhpx8"] Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.194490 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.194577 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfvlt\" (UniqueName: \"kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.296227 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.296314 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nfvlt\" (UniqueName: \"kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.296951 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.322385 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfvlt\" (UniqueName: \"kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt\") pod \"must-gather-vhpx8\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.485174 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:21:57 crc kubenswrapper[4873]: I0219 11:21:57.960670 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhpxg/must-gather-vhpx8"] Feb 19 11:21:58 crc kubenswrapper[4873]: I0219 11:21:58.599845 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" event={"ID":"93cc0682-3903-4dad-a4a1-3e807492bab4","Type":"ContainerStarted","Data":"767ace8d23069d52fe292289a53031e59a4a02afb307e1f96188d5747c12d9df"} Feb 19 11:21:58 crc kubenswrapper[4873]: I0219 11:21:58.600355 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" event={"ID":"93cc0682-3903-4dad-a4a1-3e807492bab4","Type":"ContainerStarted","Data":"ff3687756e2207c400bf2bbbc9410e3f3ee429ff1507157450e53ef945e6af06"} Feb 19 11:21:58 crc kubenswrapper[4873]: I0219 11:21:58.600370 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" event={"ID":"93cc0682-3903-4dad-a4a1-3e807492bab4","Type":"ContainerStarted","Data":"889f3805eaa8890505b91002548c1d0a4c1f2dd9fd0077ad36b03032e4ca0a94"} Feb 19 11:21:58 crc kubenswrapper[4873]: I0219 11:21:58.621349 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" podStartSLOduration=1.6213263580000001 podStartE2EDuration="1.621326358s" podCreationTimestamp="2026-02-19 11:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:21:58.618744123 +0000 UTC m=+5827.908175761" watchObservedRunningTime="2026-02-19 11:21:58.621326358 +0000 UTC m=+5827.910758006" Feb 19 11:22:00 crc kubenswrapper[4873]: I0219 11:22:00.484562 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:22:00 crc 
kubenswrapper[4873]: E0219 11:22:00.485395 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.124637 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-w28cl"] Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.127692 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.130045 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhpxg"/"default-dockercfg-v9qxb" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.204232 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.204299 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvb8w\" (UniqueName: \"kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.306372 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.306428 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvb8w\" (UniqueName: \"kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.306998 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.344215 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvb8w\" (UniqueName: \"kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w\") pod \"crc-debug-w28cl\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.449250 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:02 crc kubenswrapper[4873]: W0219 11:22:02.475078 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb42ac03_78d0_4edc_bfd9_c248a7970fa5.slice/crio-3c959f9ef4fd551afacdabb17d9558a5c14d2556363a624e3f2ba4b934e76c0e WatchSource:0}: Error finding container 3c959f9ef4fd551afacdabb17d9558a5c14d2556363a624e3f2ba4b934e76c0e: Status 404 returned error can't find the container with id 3c959f9ef4fd551afacdabb17d9558a5c14d2556363a624e3f2ba4b934e76c0e Feb 19 11:22:02 crc kubenswrapper[4873]: I0219 11:22:02.637077 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" event={"ID":"bb42ac03-78d0-4edc-bfd9-c248a7970fa5","Type":"ContainerStarted","Data":"3c959f9ef4fd551afacdabb17d9558a5c14d2556363a624e3f2ba4b934e76c0e"} Feb 19 11:22:03 crc kubenswrapper[4873]: I0219 11:22:03.647003 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" event={"ID":"bb42ac03-78d0-4edc-bfd9-c248a7970fa5","Type":"ContainerStarted","Data":"d9b27392492b242bb0b7cf96897d9ede2e48fff749603f8472748bdf9b9a9549"} Feb 19 11:22:03 crc kubenswrapper[4873]: I0219 11:22:03.676378 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" podStartSLOduration=1.6763591039999999 podStartE2EDuration="1.676359104s" podCreationTimestamp="2026-02-19 11:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:22:03.666600029 +0000 UTC m=+5832.956031677" watchObservedRunningTime="2026-02-19 11:22:03.676359104 +0000 UTC m=+5832.965790742" Feb 19 11:22:15 crc kubenswrapper[4873]: I0219 11:22:15.484439 4873 scope.go:117] "RemoveContainer" 
containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:22:15 crc kubenswrapper[4873]: E0219 11:22:15.485256 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:22:28 crc kubenswrapper[4873]: I0219 11:22:28.484293 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:22:28 crc kubenswrapper[4873]: E0219 11:22:28.485124 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:22:40 crc kubenswrapper[4873]: I0219 11:22:40.484018 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:22:40 crc kubenswrapper[4873]: E0219 11:22:40.484751 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:22:43 crc kubenswrapper[4873]: I0219 11:22:43.992228 4873 generic.go:334] 
"Generic (PLEG): container finished" podID="bb42ac03-78d0-4edc-bfd9-c248a7970fa5" containerID="d9b27392492b242bb0b7cf96897d9ede2e48fff749603f8472748bdf9b9a9549" exitCode=0 Feb 19 11:22:43 crc kubenswrapper[4873]: I0219 11:22:43.992336 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" event={"ID":"bb42ac03-78d0-4edc-bfd9-c248a7970fa5","Type":"ContainerDied","Data":"d9b27392492b242bb0b7cf96897d9ede2e48fff749603f8472748bdf9b9a9549"} Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.122234 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.157494 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-w28cl"] Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.167839 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-w28cl"] Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.197612 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvb8w\" (UniqueName: \"kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w\") pod \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.197692 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host\") pod \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\" (UID: \"bb42ac03-78d0-4edc-bfd9-c248a7970fa5\") " Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.197781 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host" (OuterVolumeSpecName: "host") pod 
"bb42ac03-78d0-4edc-bfd9-c248a7970fa5" (UID: "bb42ac03-78d0-4edc-bfd9-c248a7970fa5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.199723 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.217485 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w" (OuterVolumeSpecName: "kube-api-access-rvb8w") pod "bb42ac03-78d0-4edc-bfd9-c248a7970fa5" (UID: "bb42ac03-78d0-4edc-bfd9-c248a7970fa5"). InnerVolumeSpecName "kube-api-access-rvb8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.302299 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvb8w\" (UniqueName: \"kubernetes.io/projected/bb42ac03-78d0-4edc-bfd9-c248a7970fa5-kube-api-access-rvb8w\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:45 crc kubenswrapper[4873]: I0219 11:22:45.494756 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb42ac03-78d0-4edc-bfd9-c248a7970fa5" path="/var/lib/kubelet/pods/bb42ac03-78d0-4edc-bfd9-c248a7970fa5/volumes" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.013619 4873 scope.go:117] "RemoveContainer" containerID="d9b27392492b242bb0b7cf96897d9ede2e48fff749603f8472748bdf9b9a9549" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.013688 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-w28cl" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.406291 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-bpp7f"] Feb 19 11:22:46 crc kubenswrapper[4873]: E0219 11:22:46.411862 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb42ac03-78d0-4edc-bfd9-c248a7970fa5" containerName="container-00" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.412128 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb42ac03-78d0-4edc-bfd9-c248a7970fa5" containerName="container-00" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.412536 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb42ac03-78d0-4edc-bfd9-c248a7970fa5" containerName="container-00" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.413451 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.415797 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhpxg"/"default-dockercfg-v9qxb" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.525130 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.525296 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjps\" (UniqueName: \"kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " 
pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.626549 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.626692 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwjps\" (UniqueName: \"kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.626713 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.649941 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwjps\" (UniqueName: \"kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps\") pod \"crc-debug-bpp7f\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:46 crc kubenswrapper[4873]: I0219 11:22:46.731003 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:47 crc kubenswrapper[4873]: I0219 11:22:47.023892 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" event={"ID":"55a0c77b-8e5a-4b18-8361-b672a9d394fb","Type":"ContainerStarted","Data":"c1a3ce9c18b56e83a8ab74c08f3f38d867ea9e51b90994bc784d9d1d2d171d1d"} Feb 19 11:22:47 crc kubenswrapper[4873]: I0219 11:22:47.024328 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" event={"ID":"55a0c77b-8e5a-4b18-8361-b672a9d394fb","Type":"ContainerStarted","Data":"9f80af5e3d39e35b511c4a0fe0f40231a807314542902ce320d2460730d20861"} Feb 19 11:22:48 crc kubenswrapper[4873]: I0219 11:22:48.036450 4873 generic.go:334] "Generic (PLEG): container finished" podID="55a0c77b-8e5a-4b18-8361-b672a9d394fb" containerID="c1a3ce9c18b56e83a8ab74c08f3f38d867ea9e51b90994bc784d9d1d2d171d1d" exitCode=0 Feb 19 11:22:48 crc kubenswrapper[4873]: I0219 11:22:48.036540 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" event={"ID":"55a0c77b-8e5a-4b18-8361-b672a9d394fb","Type":"ContainerDied","Data":"c1a3ce9c18b56e83a8ab74c08f3f38d867ea9e51b90994bc784d9d1d2d171d1d"} Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.166944 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.279277 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjps\" (UniqueName: \"kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps\") pod \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.279455 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host\") pod \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\" (UID: \"55a0c77b-8e5a-4b18-8361-b672a9d394fb\") " Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.279509 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host" (OuterVolumeSpecName: "host") pod "55a0c77b-8e5a-4b18-8361-b672a9d394fb" (UID: "55a0c77b-8e5a-4b18-8361-b672a9d394fb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.280004 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55a0c77b-8e5a-4b18-8361-b672a9d394fb-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.285918 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps" (OuterVolumeSpecName: "kube-api-access-lwjps") pod "55a0c77b-8e5a-4b18-8361-b672a9d394fb" (UID: "55a0c77b-8e5a-4b18-8361-b672a9d394fb"). InnerVolumeSpecName "kube-api-access-lwjps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.382016 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwjps\" (UniqueName: \"kubernetes.io/projected/55a0c77b-8e5a-4b18-8361-b672a9d394fb-kube-api-access-lwjps\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.408894 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-bpp7f"] Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.417748 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-bpp7f"] Feb 19 11:22:49 crc kubenswrapper[4873]: I0219 11:22:49.494390 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a0c77b-8e5a-4b18-8361-b672a9d394fb" path="/var/lib/kubelet/pods/55a0c77b-8e5a-4b18-8361-b672a9d394fb/volumes" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.055285 4873 scope.go:117] "RemoveContainer" containerID="c1a3ce9c18b56e83a8ab74c08f3f38d867ea9e51b90994bc784d9d1d2d171d1d" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.055331 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-bpp7f" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.637381 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-dbqlz"] Feb 19 11:22:50 crc kubenswrapper[4873]: E0219 11:22:50.638082 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a0c77b-8e5a-4b18-8361-b672a9d394fb" containerName="container-00" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.638098 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a0c77b-8e5a-4b18-8361-b672a9d394fb" containerName="container-00" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.638346 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a0c77b-8e5a-4b18-8361-b672a9d394fb" containerName="container-00" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.639199 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.642469 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhpxg"/"default-dockercfg-v9qxb" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.706917 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8f4j\" (UniqueName: \"kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.707003 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " 
pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.809214 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8f4j\" (UniqueName: \"kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.809304 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.809449 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.842175 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8f4j\" (UniqueName: \"kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j\") pod \"crc-debug-dbqlz\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") " pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" Feb 19 11:22:50 crc kubenswrapper[4873]: I0219 11:22:50.964994 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz"
Feb 19 11:22:51 crc kubenswrapper[4873]: I0219 11:22:51.070192 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" event={"ID":"dcd95282-b63c-48c7-beaa-96e7112a6bd1","Type":"ContainerStarted","Data":"dca455c21375b3dc46bc6a6702c107398fe7c7d9aada73a2a3406f6ae430a774"}
Feb 19 11:22:52 crc kubenswrapper[4873]: I0219 11:22:52.081254 4873 generic.go:334] "Generic (PLEG): container finished" podID="dcd95282-b63c-48c7-beaa-96e7112a6bd1" containerID="44dd1810b190272782d7c29dbaf6a016c2c0e18db1ac287ddff7ea30dae394ce" exitCode=0
Feb 19 11:22:52 crc kubenswrapper[4873]: I0219 11:22:52.081377 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz" event={"ID":"dcd95282-b63c-48c7-beaa-96e7112a6bd1","Type":"ContainerDied","Data":"44dd1810b190272782d7c29dbaf6a016c2c0e18db1ac287ddff7ea30dae394ce"}
Feb 19 11:22:52 crc kubenswrapper[4873]: I0219 11:22:52.126017 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-dbqlz"]
Feb 19 11:22:52 crc kubenswrapper[4873]: I0219 11:22:52.135344 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhpxg/crc-debug-dbqlz"]
Feb 19 11:22:52 crc kubenswrapper[4873]: I0219 11:22:52.484168 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:22:52 crc kubenswrapper[4873]: E0219 11:22:52.484470 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.222811 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz"
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.267075 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8f4j\" (UniqueName: \"kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j\") pod \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") "
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.267259 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host\") pod \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\" (UID: \"dcd95282-b63c-48c7-beaa-96e7112a6bd1\") "
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.267390 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host" (OuterVolumeSpecName: "host") pod "dcd95282-b63c-48c7-beaa-96e7112a6bd1" (UID: "dcd95282-b63c-48c7-beaa-96e7112a6bd1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.267808 4873 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcd95282-b63c-48c7-beaa-96e7112a6bd1-host\") on node \"crc\" DevicePath \"\""
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.286231 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j" (OuterVolumeSpecName: "kube-api-access-v8f4j") pod "dcd95282-b63c-48c7-beaa-96e7112a6bd1" (UID: "dcd95282-b63c-48c7-beaa-96e7112a6bd1"). InnerVolumeSpecName "kube-api-access-v8f4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.369783 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8f4j\" (UniqueName: \"kubernetes.io/projected/dcd95282-b63c-48c7-beaa-96e7112a6bd1-kube-api-access-v8f4j\") on node \"crc\" DevicePath \"\""
Feb 19 11:22:53 crc kubenswrapper[4873]: I0219 11:22:53.497495 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd95282-b63c-48c7-beaa-96e7112a6bd1" path="/var/lib/kubelet/pods/dcd95282-b63c-48c7-beaa-96e7112a6bd1/volumes"
Feb 19 11:22:54 crc kubenswrapper[4873]: I0219 11:22:54.101457 4873 scope.go:117] "RemoveContainer" containerID="44dd1810b190272782d7c29dbaf6a016c2c0e18db1ac287ddff7ea30dae394ce"
Feb 19 11:22:54 crc kubenswrapper[4873]: I0219 11:22:54.101513 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/crc-debug-dbqlz"
Feb 19 11:23:05 crc kubenswrapper[4873]: I0219 11:23:05.484047 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:23:05 crc kubenswrapper[4873]: E0219 11:23:05.484853 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:23:18 crc kubenswrapper[4873]: I0219 11:23:18.484129 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:23:18 crc kubenswrapper[4873]: E0219 11:23:18.484918 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:23:31 crc kubenswrapper[4873]: I0219 11:23:31.492639 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:23:31 crc kubenswrapper[4873]: E0219 11:23:31.493446 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.272369 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c4d59d6dd-4nh9w_76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3/barbican-api/0.log"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.427912 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c4d59d6dd-4nh9w_76cdfe98-7182-4ed8-8d4a-7472ed0dc7c3/barbican-api-log/0.log"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.473396 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-667444df98-tdgw9_9be5e1ee-a214-46ca-a5bf-d1d337848085/barbican-keystone-listener/0.log"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.556436 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-667444df98-tdgw9_9be5e1ee-a214-46ca-a5bf-d1d337848085/barbican-keystone-listener-log/0.log"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.653973 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-596d5556df-fx4q8_fc48b70c-5ab9-4765-a8cd-5985a3f63854/barbican-worker/0.log"
Feb 19 11:23:38 crc kubenswrapper[4873]: I0219 11:23:38.988585 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-596d5556df-fx4q8_fc48b70c-5ab9-4765-a8cd-5985a3f63854/barbican-worker-log/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.239662 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-j9s9r_fb8aa6eb-a92d-47ab-803f-664399242dde/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.318969 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/ceilometer-notification-agent/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.329908 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/ceilometer-central-agent/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.478372 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/proxy-httpd/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.488090 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e432fa6f-daf1-4f3a-9f84-ac9495956013/sg-core/0.log"
Feb 19 11:23:39 crc kubenswrapper[4873]: I0219 11:23:39.805385 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f3dabe51-c676-42bb-936a-d784ee2e565a/cinder-api-log/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.093566 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_312e766d-4086-4bab-bf8f-9a154f1da5b5/probe/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.131333 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_312e766d-4086-4bab-bf8f-9a154f1da5b5/cinder-backup/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.322905 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f3dabe51-c676-42bb-936a-d784ee2e565a/cinder-api/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.327280 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1/cinder-scheduler/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.419028 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cd9b32e6-4f78-4f9c-9fbd-e91b37d110a1/probe/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.556566 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_717b3122-e7c6-4cbe-8528-4b582dd7adc5/probe/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.853318 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_717b3122-e7c6-4cbe-8528-4b582dd7adc5/cinder-volume/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.856238 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8268173a-e7be-4edd-a1e8-bed3486b138e/probe/0.log"
Feb 19 11:23:40 crc kubenswrapper[4873]: I0219 11:23:40.901617 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8268173a-e7be-4edd-a1e8-bed3486b138e/cinder-volume/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.090599 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-snp5b_f0739ccd-765a-42c4-89b4-de6adf188e24/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.172825 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kzrd2_40ec1f13-0b91-4c7c-a13e-11e60f55e627/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.291412 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/init/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.519114 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/init/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.526260 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2s7zj_ab7d5a49-ac61-4963-8766-1716098f3d4c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.731940 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6c564b89cf-9v87f_20253d93-eafe-45db-b11e-338714ffd978/dnsmasq-dns/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.823906 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_09cfd898-398f-41ae-8c45-1ed215b69683/glance-httpd/0.log"
Feb 19 11:23:41 crc kubenswrapper[4873]: I0219 11:23:41.849948 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_09cfd898-398f-41ae-8c45-1ed215b69683/glance-log/0.log"
Feb 19 11:23:42 crc kubenswrapper[4873]: I0219 11:23:42.056501 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0df7963-e78f-457c-a27f-45c26232cfa7/glance-httpd/0.log"
Feb 19 11:23:42 crc kubenswrapper[4873]: I0219 11:23:42.084468 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c0df7963-e78f-457c-a27f-45c26232cfa7/glance-log/0.log"
Feb 19 11:23:42 crc kubenswrapper[4873]: I0219 11:23:42.321171 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6687d9896d-v96j2_fa527f64-6e38-48c2-9927-a319f4579070/horizon/0.log"
Feb 19 11:23:42 crc kubenswrapper[4873]: I0219 11:23:42.375259 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2zqn6_537c2ac8-0912-4609-ab4e-760060a78d52/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:42 crc kubenswrapper[4873]: I0219 11:23:42.605667 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s2jwj_4b127e45-b09c-4e11-9423-58f1f51effd4/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:43 crc kubenswrapper[4873]: I0219 11:23:43.084742 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6687d9896d-v96j2_fa527f64-6e38-48c2-9927-a319f4579070/horizon-log/0.log"
Feb 19 11:23:43 crc kubenswrapper[4873]: I0219 11:23:43.198587 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524981-pxsmx_3f08f0c4-870d-4d9a-8a82-ce22827ce779/keystone-cron/0.log"
Feb 19 11:23:43 crc kubenswrapper[4873]: I0219 11:23:43.224834 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5fcd445c48-xvpw4_ed86f09e-909d-451b-96c0-9b4b7b27eb03/keystone-api/0.log"
Feb 19 11:23:43 crc kubenswrapper[4873]: I0219 11:23:43.479219 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_84c63c73-45f3-4d27-a3a3-cbfecd9e1810/kube-state-metrics/0.log"
Feb 19 11:23:43 crc kubenswrapper[4873]: I0219 11:23:43.532608 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-z5kkf_2baa296e-fb37-4d90-a7e4-68f61006e085/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.019905 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-988bb_a607f592-ebca-4bf5-9e98-f9e2bc131ff1/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.038703 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76cc4fb9fc-vdfd4_f168d086-aaa7-4a6e-9a65-5ab28e10a7e8/neutron-httpd/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.179310 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76cc4fb9fc-vdfd4_f168d086-aaa7-4a6e-9a65-5ab28e10a7e8/neutron-api/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.295715 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/setup-container/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.582287 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/setup-container/0.log"
Feb 19 11:23:44 crc kubenswrapper[4873]: I0219 11:23:44.621055 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da89f0ff-c51c-4c4a-8df4-f7787d29ddd2/rabbitmq/0.log"
Feb 19 11:23:45 crc kubenswrapper[4873]: I0219 11:23:45.395187 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c25b9f1f-0533-4e00-a926-08639b1b2266/nova-cell0-conductor-conductor/0.log"
Feb 19 11:23:45 crc kubenswrapper[4873]: I0219 11:23:45.483893 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:23:45 crc kubenswrapper[4873]: E0219 11:23:45.484268 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:23:45 crc kubenswrapper[4873]: I0219 11:23:45.757196 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0688136a-f0b5-4a2a-8f08-9c99d9c3644c/nova-cell1-conductor-conductor/0.log"
Feb 19 11:23:46 crc kubenswrapper[4873]: I0219 11:23:46.108525 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cf46452a-f49d-48ab-a235-9e96f89c931f/nova-cell1-novncproxy-novncproxy/0.log"
Feb 19 11:23:46 crc kubenswrapper[4873]: I0219 11:23:46.428172 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4f4e613e-0a31-4191-9afb-4fd0300586f9/nova-api-log/0.log"
Feb 19 11:23:46 crc kubenswrapper[4873]: I0219 11:23:46.445006 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v25t6_ce5f426d-554a-469a-be1e-e3e1b9bfa68e/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:46 crc kubenswrapper[4873]: I0219 11:23:46.762313 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15cbab3c-9843-4bf6-b0e8-b65dec1e5112/nova-metadata-log/0.log"
Feb 19 11:23:46 crc kubenswrapper[4873]: I0219 11:23:46.930403 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4f4e613e-0a31-4191-9afb-4fd0300586f9/nova-api-api/0.log"
Feb 19 11:23:47 crc kubenswrapper[4873]: I0219 11:23:47.419779 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/mysql-bootstrap/0.log"
Feb 19 11:23:47 crc kubenswrapper[4873]: I0219 11:23:47.479018 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_adb0395e-00f8-4bc6-a0a6-2b956235c58c/nova-scheduler-scheduler/0.log"
Feb 19 11:23:47 crc kubenswrapper[4873]: I0219 11:23:47.683653 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/mysql-bootstrap/0.log"
Feb 19 11:23:47 crc kubenswrapper[4873]: I0219 11:23:47.755206 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e3385c22-baa0-4261-b498-6a09c8768520/galera/0.log"
Feb 19 11:23:47 crc kubenswrapper[4873]: I0219 11:23:47.907852 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/mysql-bootstrap/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.101299 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/mysql-bootstrap/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.123705 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f1b4e4e4-15bf-4c4d-b7c4-bc3029c32964/galera/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.380235 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5c4eb2b5-d272-49ff-938e-3e3359d29f46/openstackclient/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.449935 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-djxfb_888c3336-cd8a-4bf2-805f-6b473fb272f4/openstack-network-exporter/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.599672 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server-init/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.860954 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server-init/0.log"
Feb 19 11:23:48 crc kubenswrapper[4873]: I0219 11:23:48.870677 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovsdb-server/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.107912 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vsnt5_b0ab9d21-0c11-4940-ad43-3e20c46012ad/ovn-controller/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.283639 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t5bgp_de2f2331-fc83-420b-9e1b-fe08998cb0ab/ovs-vswitchd/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.370742 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_15cbab3c-9843-4bf6-b0e8-b65dec1e5112/nova-metadata-metadata/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.408317 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dks5c_f5d576b5-56dd-4f9f-b67b-0ee87213ea78/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.540739 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd6df8e5-8bc5-4bd5-b466-a90642932cc2/openstack-network-exporter/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.661981 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd6df8e5-8bc5-4bd5-b466-a90642932cc2/ovn-northd/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.752380 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4574f6e3-d697-424c-a9f1-7b74afb82324/openstack-network-exporter/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.910692 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4574f6e3-d697-424c-a9f1-7b74afb82324/ovsdbserver-nb/0.log"
Feb 19 11:23:49 crc kubenswrapper[4873]: I0219 11:23:49.910923 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_877efa5f-4357-4396-8805-729237cd4e8f/openstack-network-exporter/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.009914 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_877efa5f-4357-4396-8805-729237cd4e8f/ovsdbserver-sb/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.362084 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6696d67b98-wrvnm_c5d4dde9-793b-403e-8701-84cca6a509e1/placement-api/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.419867 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/init-config-reloader/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.442945 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6696d67b98-wrvnm_c5d4dde9-793b-403e-8701-84cca6a509e1/placement-log/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.643208 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/config-reloader/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.646215 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/init-config-reloader/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.648918 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/prometheus/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.735031 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ae630a8f-ee42-4f96-adb9-d18bf713af37/thanos-sidecar/0.log"
Feb 19 11:23:50 crc kubenswrapper[4873]: I0219 11:23:50.870803 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/setup-container/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.136706 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/setup-container/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.184799 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1150426f-909f-4b05-b216-ccf29f7039eb/rabbitmq/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.206817 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/setup-container/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.394875 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/setup-container/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.428236 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nw9vj_157ee933-b692-4c92-bcbd-967bc1cd377c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.493727 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d564a6d4-4702-4e96-b814-8d9f01db02e5/rabbitmq/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.554067 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_21bb5d7d-6565-484a-af2d-0edcff2729b3/memcached/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.928217 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mt2n6_3ba1c3b5-6b1a-4d7e-bbdd-fb492abd6647/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:51 crc kubenswrapper[4873]: I0219 11:23:51.954372 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mxmpn_fda37ba3-82f5-4d49-a15f-4dca53649ec7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.098528 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5wvjf_7843f72c-5559-44d6-86e0-62f013e0a073/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.164206 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sw8hj_15999617-f2b4-4a3f-911d-422db799fa37/ssh-known-hosts-edpm-deployment/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.386090 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c6d694569-qbpxm_d51beb70-e455-4e75-9e06-863b41fbf9a8/proxy-server/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.419769 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c6d694569-qbpxm_d51beb70-e455-4e75-9e06-863b41fbf9a8/proxy-httpd/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.449170 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mx6qq_91fbca18-847d-4e7b-8a40-e52dd348d155/swift-ring-rebalance/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.617617 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-reaper/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.643515 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-auditor/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.646506 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-replicator/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.665011 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/account-server/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.687545 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-auditor/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.901859 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-auditor/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.904266 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-server/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.907658 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-updater/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.912521 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/container-replicator/0.log"
Feb 19 11:23:52 crc kubenswrapper[4873]: I0219 11:23:52.962844 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-expirer/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.129535 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-server/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.133645 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/rsync/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.138196 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-updater/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.156308 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/object-replicator/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.185715 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c3b21a02-7162-42ca-84cf-e0fa36b04a22/swift-recon-cron/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.358807 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4bwqz_bf143721-2963-4009-8e23-0c283b4a88a3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.388460 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5e5a79da-a068-4a68-ba79-6719ea0fb353/tempest-tests-tempest-tests-runner/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.526127 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_58738a83-0734-4889-9b0e-650e43f6dbb7/test-operator-logs-container/0.log"
Feb 19 11:23:53 crc kubenswrapper[4873]: I0219 11:23:53.629863 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2nnwh_28f40398-582f-40ed-92b8-2ff5a19d138d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 11:23:54 crc kubenswrapper[4873]: I0219 11:23:54.319151 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_3d0e231c-7848-4f57-a28b-dfec3c87b617/watcher-applier/0.log"
Feb 19 11:23:55 crc kubenswrapper[4873]: I0219 11:23:55.034593 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9fb835f9-7ac4-4212-a372-b793c2fb8afd/watcher-api-log/0.log"
Feb 19 11:23:56 crc kubenswrapper[4873]: I0219 11:23:56.483991 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:23:56 crc kubenswrapper[4873]: E0219 11:23:56.484302 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:23:57 crc kubenswrapper[4873]: I0219 11:23:57.415626 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_3ecf8671-28f5-4549-a4c1-0cdad8421837/watcher-decision-engine/0.log"
Feb 19 11:23:58 crc kubenswrapper[4873]: I0219 11:23:58.343450 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_9fb835f9-7ac4-4212-a372-b793c2fb8afd/watcher-api/0.log"
Feb 19 11:24:09 crc kubenswrapper[4873]: I0219 11:24:09.488462 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:24:09 crc kubenswrapper[4873]: E0219 11:24:09.489199 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:24:21 crc kubenswrapper[4873]: I0219 11:24:21.490985 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838"
Feb 19 11:24:21 crc kubenswrapper[4873]: E0219 11:24:21.491857 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"
Feb 19 11:24:22 crc kubenswrapper[4873]: I0219 11:24:22.884620 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.060701 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.075071 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.082497 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.340745 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/pull/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.363305 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/extract/0.log"
Feb 19 11:24:23 crc kubenswrapper[4873]: I0219 11:24:23.395621 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8e5457509d36cb248fb3a6842d85c029cfff46a5712e7fc5aa077e58cemtmp6_78582e6c-dedc-4608-a542-6837184954ab/util/0.log"
Feb 19 11:24:24 crc kubenswrapper[4873]: I0219 11:24:24.161136 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-t54x9_f108f6ea-4506-48bf-b948-e367078c3dce/manager/0.log"
Feb 19 11:24:24 crc kubenswrapper[4873]: I0219 11:24:24.553036 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-vgxsl_43531003-74d3-43b9-b0f5-6fca42b21975/manager/0.log"
Feb 19 11:24:24 crc kubenswrapper[4873]: I0219 11:24:24.624834 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-vwx5n_8d4b6c84-e5ed-4761-b7c7-95b21da856f7/manager/0.log"
Feb 19 11:24:24 crc kubenswrapper[4873]: I0219 11:24:24.892554 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-r9b5b_2b1c8872-b310-4994-819c-a8e472d8e522/manager/0.log"
Feb 19 11:24:25 crc kubenswrapper[4873]: I0219 11:24:25.521428 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-f86jr_aeccf47e-b953-4036-b271-be284b9ab385/manager/0.log"
Feb 19 11:24:25 crc kubenswrapper[4873]: I0219 11:24:25.797318 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-4t46s_3ff0155f-08fd-42f5-9b31-c3b9a7cefefe/manager/0.log"
Feb 19 11:24:26 crc kubenswrapper[4873]: I0219 11:24:26.054228 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-t7mwr_ecf3484a-026e-4655-bfa8-e5292e2f62c5/manager/0.log"
Feb 19 11:24:26 crc kubenswrapper[4873]: I0219 11:24:26.284972 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-t2hfl_e4172fa9-b04e-4894-82d6-ec65ea92b004/manager/0.log"
Feb 19 11:24:26 crc kubenswrapper[4873]: I0219 11:24:26.513217 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8v7q6_588098b3-662f-4f6f-914c-8cb28e055ccd/manager/0.log"
Feb 19 11:24:26 crc kubenswrapper[4873]: I0219 11:24:26.803686 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-cx7xf_2e7ca3f2-f73b-4bac-93bb-68b2518d956e/manager/0.log"
Feb 19 11:24:26 crc kubenswrapper[4873]: I0219 11:24:26.832255 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-d6h72_c471d099-fa02-4463-9eb9-9d0f6a3832e6/manager/0.log"
Feb 19 11:24:27 crc kubenswrapper[4873]: I0219 11:24:27.180658 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-n6djt_8eec8859-f388-4d81-bbce-0433a66a1ef7/manager/0.log"
Feb 19
11:24:27 crc kubenswrapper[4873]: I0219 11:24:27.425328 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cr64hv_515c6c0c-ae00-4ae1-ab3f-e22e5a585681/manager/0.log" Feb 19 11:24:27 crc kubenswrapper[4873]: I0219 11:24:27.669467 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8476bb6847-rv4sx_e18b6851-e022-488e-bd95-27d1659f2761/operator/0.log" Feb 19 11:24:27 crc kubenswrapper[4873]: I0219 11:24:27.931778 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p62rb_0144fe1c-ef13-4b4e-8cda-ddc72e2516bb/registry-server/0.log" Feb 19 11:24:28 crc kubenswrapper[4873]: I0219 11:24:28.244157 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-db4dr_dc53742c-7e71-49fa-9378-b26036c80275/manager/0.log" Feb 19 11:24:28 crc kubenswrapper[4873]: I0219 11:24:28.487672 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-6hpwv_74e9952e-50ef-4389-aa77-8f6e9cc790a8/manager/0.log" Feb 19 11:24:28 crc kubenswrapper[4873]: I0219 11:24:28.708705 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lcnz4_9574bff7-0aac-4a24-b69f-135ff968422e/operator/0.log" Feb 19 11:24:28 crc kubenswrapper[4873]: I0219 11:24:28.963892 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-r74rt_1f098ace-bbc4-46ee-8e72-ab65a59851eb/manager/0.log" Feb 19 11:24:29 crc kubenswrapper[4873]: I0219 11:24:29.521021 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-2szzj_e139553a-a68d-424d-95b5-9093ea05440b/manager/0.log" Feb 
19 11:24:29 crc kubenswrapper[4873]: I0219 11:24:29.546235 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-g22tc_0e9da99c-56ee-4353-9378-c59a2c4e1608/manager/0.log" Feb 19 11:24:29 crc kubenswrapper[4873]: I0219 11:24:29.964443 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7d767c64df-hld6w_e827e28d-ffd8-4f59-82bf-a6db1dab5413/manager/0.log" Feb 19 11:24:30 crc kubenswrapper[4873]: I0219 11:24:30.136337 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77c7c45f98-q8khx_26f0a6ea-18fb-411a-b193-83938a4bbe19/manager/0.log" Feb 19 11:24:30 crc kubenswrapper[4873]: I0219 11:24:30.200958 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-t9kgf_080befba-c501-4f84-8644-6b9fda0d8d5f/manager/0.log" Feb 19 11:24:33 crc kubenswrapper[4873]: I0219 11:24:33.483708 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:24:33 crc kubenswrapper[4873]: E0219 11:24:33.484552 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:24:36 crc kubenswrapper[4873]: I0219 11:24:36.353138 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-hqmvw_d53d2bae-fcdd-408c-9950-440e841cc035/manager/0.log" Feb 19 11:24:45 crc kubenswrapper[4873]: I0219 11:24:45.484973 
4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:24:45 crc kubenswrapper[4873]: E0219 11:24:45.485823 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:24:50 crc kubenswrapper[4873]: I0219 11:24:50.926399 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s67xb_d639ff25-343e-4e7c-bd2e-f5fc533923f4/control-plane-machine-set-operator/0.log" Feb 19 11:24:51 crc kubenswrapper[4873]: I0219 11:24:51.435720 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k627b_df659e7d-39ab-41ee-8df5-08896976666c/machine-api-operator/0.log" Feb 19 11:24:51 crc kubenswrapper[4873]: I0219 11:24:51.463247 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k627b_df659e7d-39ab-41ee-8df5-08896976666c/kube-rbac-proxy/0.log" Feb 19 11:24:58 crc kubenswrapper[4873]: I0219 11:24:58.484051 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:24:58 crc kubenswrapper[4873]: E0219 11:24:58.484851 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:25:04 crc kubenswrapper[4873]: I0219 11:25:04.702716 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ckd42_51fc361b-11a5-480a-a5b9-0eb4b7670e83/cert-manager-controller/0.log" Feb 19 11:25:04 crc kubenswrapper[4873]: I0219 11:25:04.883812 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-zhqgv_084c90b4-3270-4f64-8c8c-1a96f05dc1fa/cert-manager-cainjector/0.log" Feb 19 11:25:04 crc kubenswrapper[4873]: I0219 11:25:04.891130 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-fhd9c_2eebe311-368b-45b4-9e74-7442221e3785/cert-manager-webhook/0.log" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.524292 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:07 crc kubenswrapper[4873]: E0219 11:25:07.525137 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd95282-b63c-48c7-beaa-96e7112a6bd1" containerName="container-00" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.525153 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd95282-b63c-48c7-beaa-96e7112a6bd1" containerName="container-00" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.525396 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd95282-b63c-48c7-beaa-96e7112a6bd1" containerName="container-00" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.527073 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.549944 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.603949 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdqsh\" (UniqueName: \"kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.604058 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.604234 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.706471 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.706570 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wdqsh\" (UniqueName: \"kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.706634 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.707067 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.707191 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.724548 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdqsh\" (UniqueName: \"kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh\") pod \"redhat-operators-vh782\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:07 crc kubenswrapper[4873]: I0219 11:25:07.862049 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:08 crc kubenswrapper[4873]: I0219 11:25:08.358992 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:08 crc kubenswrapper[4873]: I0219 11:25:08.447780 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerStarted","Data":"b05d7372ec51e0442e4648a2dc4cd081b94c1b9c9a02e0ea0e267bcfe4e90abb"} Feb 19 11:25:09 crc kubenswrapper[4873]: I0219 11:25:09.459987 4873 generic.go:334] "Generic (PLEG): container finished" podID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerID="edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4" exitCode=0 Feb 19 11:25:09 crc kubenswrapper[4873]: I0219 11:25:09.460029 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerDied","Data":"edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4"} Feb 19 11:25:10 crc kubenswrapper[4873]: I0219 11:25:10.922184 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:10 crc kubenswrapper[4873]: I0219 11:25:10.924496 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:10 crc kubenswrapper[4873]: I0219 11:25:10.944596 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.097745 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9p5p\" (UniqueName: \"kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.098172 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.098195 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.199471 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9p5p\" (UniqueName: \"kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.199675 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.199696 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.200235 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.200687 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.223128 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9p5p\" (UniqueName: \"kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p\") pod \"redhat-marketplace-sxbs9\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.247294 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.516085 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerStarted","Data":"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55"} Feb 19 11:25:11 crc kubenswrapper[4873]: I0219 11:25:11.799359 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:12 crc kubenswrapper[4873]: I0219 11:25:12.484266 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:25:12 crc kubenswrapper[4873]: E0219 11:25:12.484944 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:25:12 crc kubenswrapper[4873]: I0219 11:25:12.513661 4873 generic.go:334] "Generic (PLEG): container finished" podID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerID="5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993" exitCode=0 Feb 19 11:25:12 crc kubenswrapper[4873]: I0219 11:25:12.513711 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerDied","Data":"5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993"} Feb 19 11:25:12 crc kubenswrapper[4873]: I0219 11:25:12.513758 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" 
event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerStarted","Data":"6d80a83b03b357f5f4bbd1bca810d6a76cb6aaf294b677f996036db6833706e1"} Feb 19 11:25:14 crc kubenswrapper[4873]: I0219 11:25:14.534709 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerStarted","Data":"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af"} Feb 19 11:25:16 crc kubenswrapper[4873]: I0219 11:25:16.560928 4873 generic.go:334] "Generic (PLEG): container finished" podID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerID="c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af" exitCode=0 Feb 19 11:25:16 crc kubenswrapper[4873]: I0219 11:25:16.560991 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerDied","Data":"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af"} Feb 19 11:25:17 crc kubenswrapper[4873]: I0219 11:25:17.573720 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerStarted","Data":"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da"} Feb 19 11:25:17 crc kubenswrapper[4873]: I0219 11:25:17.600117 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxbs9" podStartSLOduration=3.148563052 podStartE2EDuration="7.600078902s" podCreationTimestamp="2026-02-19 11:25:10 +0000 UTC" firstStartedPulling="2026-02-19 11:25:12.515746077 +0000 UTC m=+6021.805177735" lastFinishedPulling="2026-02-19 11:25:16.967261947 +0000 UTC m=+6026.256693585" observedRunningTime="2026-02-19 11:25:17.59763577 +0000 UTC m=+6026.887067408" watchObservedRunningTime="2026-02-19 11:25:17.600078902 +0000 UTC 
m=+6026.889510540" Feb 19 11:25:18 crc kubenswrapper[4873]: I0219 11:25:18.600230 4873 generic.go:334] "Generic (PLEG): container finished" podID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerID="4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55" exitCode=0 Feb 19 11:25:18 crc kubenswrapper[4873]: I0219 11:25:18.600280 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerDied","Data":"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55"} Feb 19 11:25:19 crc kubenswrapper[4873]: I0219 11:25:19.565933 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-9cr2m_9b3c6348-1c17-4774-9739-7a1dd3021d81/nmstate-console-plugin/0.log" Feb 19 11:25:19 crc kubenswrapper[4873]: I0219 11:25:19.615435 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerStarted","Data":"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819"} Feb 19 11:25:19 crc kubenswrapper[4873]: I0219 11:25:19.640185 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vh782" podStartSLOduration=3.088485794 podStartE2EDuration="12.640164627s" podCreationTimestamp="2026-02-19 11:25:07 +0000 UTC" firstStartedPulling="2026-02-19 11:25:09.461931018 +0000 UTC m=+6018.751362656" lastFinishedPulling="2026-02-19 11:25:19.013609851 +0000 UTC m=+6028.303041489" observedRunningTime="2026-02-19 11:25:19.635485489 +0000 UTC m=+6028.924917127" watchObservedRunningTime="2026-02-19 11:25:19.640164627 +0000 UTC m=+6028.929596265" Feb 19 11:25:19 crc kubenswrapper[4873]: I0219 11:25:19.856371 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-75txf_62408ce4-73ce-4726-91c1-96f645c39dee/nmstate-handler/0.log" Feb 19 11:25:19 crc kubenswrapper[4873]: I0219 11:25:19.974278 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8jgss_3b960434-ef37-45ae-aa50-8d719c8e2df5/kube-rbac-proxy/0.log" Feb 19 11:25:20 crc kubenswrapper[4873]: I0219 11:25:20.146540 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8jgss_3b960434-ef37-45ae-aa50-8d719c8e2df5/nmstate-metrics/0.log" Feb 19 11:25:20 crc kubenswrapper[4873]: I0219 11:25:20.183511 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-qlgxw_f7f28c8a-4571-485c-96a2-fc1c5856e3ea/nmstate-operator/0.log" Feb 19 11:25:20 crc kubenswrapper[4873]: I0219 11:25:20.371235 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-nfh8w_7af074a2-c1f7-4253-8efc-065748e0452b/nmstate-webhook/0.log" Feb 19 11:25:21 crc kubenswrapper[4873]: I0219 11:25:21.248137 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:21 crc kubenswrapper[4873]: I0219 11:25:21.248466 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:22 crc kubenswrapper[4873]: I0219 11:25:22.307847 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sxbs9" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="registry-server" probeResult="failure" output=< Feb 19 11:25:22 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:25:22 crc kubenswrapper[4873]: > Feb 19 11:25:25 crc kubenswrapper[4873]: I0219 11:25:25.485228 4873 scope.go:117] "RemoveContainer" 
containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:25:26 crc kubenswrapper[4873]: I0219 11:25:26.690920 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3"} Feb 19 11:25:27 crc kubenswrapper[4873]: I0219 11:25:27.862263 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:27 crc kubenswrapper[4873]: I0219 11:25:27.862621 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:28 crc kubenswrapper[4873]: I0219 11:25:28.906662 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh782" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" probeResult="failure" output=< Feb 19 11:25:28 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:25:28 crc kubenswrapper[4873]: > Feb 19 11:25:31 crc kubenswrapper[4873]: I0219 11:25:31.298339 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:31 crc kubenswrapper[4873]: I0219 11:25:31.352506 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:31 crc kubenswrapper[4873]: I0219 11:25:31.539765 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:32 crc kubenswrapper[4873]: I0219 11:25:32.742890 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxbs9" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" 
containerName="registry-server" containerID="cri-o://a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da" gracePeriod=2 Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.297252 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.401225 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9p5p\" (UniqueName: \"kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p\") pod \"e39282f2-483a-457f-9a81-ed6faf0794a2\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.401551 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content\") pod \"e39282f2-483a-457f-9a81-ed6faf0794a2\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.401645 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities\") pod \"e39282f2-483a-457f-9a81-ed6faf0794a2\" (UID: \"e39282f2-483a-457f-9a81-ed6faf0794a2\") " Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.402369 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities" (OuterVolumeSpecName: "utilities") pod "e39282f2-483a-457f-9a81-ed6faf0794a2" (UID: "e39282f2-483a-457f-9a81-ed6faf0794a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.407950 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p" (OuterVolumeSpecName: "kube-api-access-r9p5p") pod "e39282f2-483a-457f-9a81-ed6faf0794a2" (UID: "e39282f2-483a-457f-9a81-ed6faf0794a2"). InnerVolumeSpecName "kube-api-access-r9p5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.451261 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e39282f2-483a-457f-9a81-ed6faf0794a2" (UID: "e39282f2-483a-457f-9a81-ed6faf0794a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.503746 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.503792 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9p5p\" (UniqueName: \"kubernetes.io/projected/e39282f2-483a-457f-9a81-ed6faf0794a2-kube-api-access-r9p5p\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.503806 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39282f2-483a-457f-9a81-ed6faf0794a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.756528 4873 generic.go:334] "Generic (PLEG): container finished" podID="e39282f2-483a-457f-9a81-ed6faf0794a2" 
containerID="a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da" exitCode=0 Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.756629 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerDied","Data":"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da"} Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.756923 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxbs9" event={"ID":"e39282f2-483a-457f-9a81-ed6faf0794a2","Type":"ContainerDied","Data":"6d80a83b03b357f5f4bbd1bca810d6a76cb6aaf294b677f996036db6833706e1"} Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.756656 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxbs9" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.756959 4873 scope.go:117] "RemoveContainer" containerID="a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.797640 4873 scope.go:117] "RemoveContainer" containerID="c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.798361 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.812701 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxbs9"] Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.817181 4873 scope.go:117] "RemoveContainer" containerID="5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.873775 4873 scope.go:117] "RemoveContainer" containerID="a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da" Feb 19 
11:25:33 crc kubenswrapper[4873]: E0219 11:25:33.874290 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da\": container with ID starting with a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da not found: ID does not exist" containerID="a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.874356 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da"} err="failed to get container status \"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da\": rpc error: code = NotFound desc = could not find container \"a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da\": container with ID starting with a622611ca13929d3f89f9fe1cf634a4f477dfef54420733eea69ed9789d819da not found: ID does not exist" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.874384 4873 scope.go:117] "RemoveContainer" containerID="c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af" Feb 19 11:25:33 crc kubenswrapper[4873]: E0219 11:25:33.874731 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af\": container with ID starting with c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af not found: ID does not exist" containerID="c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.874772 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af"} err="failed to get container status 
\"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af\": rpc error: code = NotFound desc = could not find container \"c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af\": container with ID starting with c6651c8df75a6ece9b8f0a0290c6230600543e4bbf34a68e0e3e25ac6e16e8af not found: ID does not exist" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.874803 4873 scope.go:117] "RemoveContainer" containerID="5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993" Feb 19 11:25:33 crc kubenswrapper[4873]: E0219 11:25:33.875211 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993\": container with ID starting with 5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993 not found: ID does not exist" containerID="5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993" Feb 19 11:25:33 crc kubenswrapper[4873]: I0219 11:25:33.875255 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993"} err="failed to get container status \"5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993\": rpc error: code = NotFound desc = could not find container \"5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993\": container with ID starting with 5f164c85151d9a6b6ad768d870ce77d344a5947bf1dcc7d03e716b30ff844993 not found: ID does not exist" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.345759 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:34 crc kubenswrapper[4873]: E0219 11:25:34.346222 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="registry-server" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 
11:25:34.346243 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="registry-server" Feb 19 11:25:34 crc kubenswrapper[4873]: E0219 11:25:34.346287 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="extract-utilities" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.346296 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="extract-utilities" Feb 19 11:25:34 crc kubenswrapper[4873]: E0219 11:25:34.346321 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="extract-content" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.346329 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="extract-content" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.346560 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" containerName="registry-server" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.348462 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.393696 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.419225 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.419297 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nkc\" (UniqueName: \"kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.419369 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.521548 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.521965 4873 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p6nkc\" (UniqueName: \"kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.522039 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.522735 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.522822 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.544598 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nkc\" (UniqueName: \"kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc\") pod \"community-operators-vfjzm\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.671207 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:34 crc kubenswrapper[4873]: I0219 11:25:34.808969 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7nww_5d79d4d8-e595-4aec-bc0b-7347b826c257/prometheus-operator/0.log" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.123272 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-qptdb_4724c979-0040-4017-86ce-78d2a8bdb44e/prometheus-operator-admission-webhook/0.log" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.142724 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7_3180318c-7d9a-454b-8de4-887fabae362b/prometheus-operator-admission-webhook/0.log" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.298047 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:35 crc kubenswrapper[4873]: W0219 11:25:35.301978 4873 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e9e14ea_5f94_4828_9b90_06e1b92b6e87.slice/crio-b5bed3859900a512dacb242a8fff7e5f63835fd00ecc36327832ce7b28da3a82 WatchSource:0}: Error finding container b5bed3859900a512dacb242a8fff7e5f63835fd00ecc36327832ce7b28da3a82: Status 404 returned error can't find the container with id b5bed3859900a512dacb242a8fff7e5f63835fd00ecc36327832ce7b28da3a82 Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.370076 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7wtlv_b23281d2-935e-47c1-bc83-8d00c7649625/operator/0.log" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.407674 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8sflg_ea1cc2c7-c932-4b3d-b718-d017eb06163f/perses-operator/0.log" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.499157 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39282f2-483a-457f-9a81-ed6faf0794a2" path="/var/lib/kubelet/pods/e39282f2-483a-457f-9a81-ed6faf0794a2/volumes" Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.812130 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerID="5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e" exitCode=0 Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.812284 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerDied","Data":"5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e"} Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.813196 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerStarted","Data":"b5bed3859900a512dacb242a8fff7e5f63835fd00ecc36327832ce7b28da3a82"} Feb 19 11:25:35 crc kubenswrapper[4873]: I0219 11:25:35.814470 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:25:36 crc kubenswrapper[4873]: I0219 11:25:36.824564 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerStarted","Data":"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea"} Feb 19 11:25:38 crc kubenswrapper[4873]: I0219 11:25:38.850926 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" 
containerID="3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea" exitCode=0 Feb 19 11:25:38 crc kubenswrapper[4873]: I0219 11:25:38.851360 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerDied","Data":"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea"} Feb 19 11:25:38 crc kubenswrapper[4873]: I0219 11:25:38.915139 4873 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh782" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" probeResult="failure" output=< Feb 19 11:25:38 crc kubenswrapper[4873]: timeout: failed to connect service ":50051" within 1s Feb 19 11:25:38 crc kubenswrapper[4873]: > Feb 19 11:25:39 crc kubenswrapper[4873]: I0219 11:25:39.867389 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerStarted","Data":"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4"} Feb 19 11:25:39 crc kubenswrapper[4873]: I0219 11:25:39.887327 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfjzm" podStartSLOduration=2.437900938 podStartE2EDuration="5.887304006s" podCreationTimestamp="2026-02-19 11:25:34 +0000 UTC" firstStartedPulling="2026-02-19 11:25:35.814228353 +0000 UTC m=+6045.103659991" lastFinishedPulling="2026-02-19 11:25:39.263631411 +0000 UTC m=+6048.553063059" observedRunningTime="2026-02-19 11:25:39.885175112 +0000 UTC m=+6049.174606770" watchObservedRunningTime="2026-02-19 11:25:39.887304006 +0000 UTC m=+6049.176735644" Feb 19 11:25:44 crc kubenswrapper[4873]: I0219 11:25:44.672407 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:44 crc 
kubenswrapper[4873]: I0219 11:25:44.673477 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:44 crc kubenswrapper[4873]: I0219 11:25:44.715339 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:44 crc kubenswrapper[4873]: I0219 11:25:44.976612 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:45 crc kubenswrapper[4873]: I0219 11:25:45.021142 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:46 crc kubenswrapper[4873]: I0219 11:25:46.931644 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vfjzm" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="registry-server" containerID="cri-o://b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4" gracePeriod=2 Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.436759 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.496531 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nkc\" (UniqueName: \"kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc\") pod \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.496660 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content\") pod \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.496739 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities\") pod \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\" (UID: \"8e9e14ea-5f94-4828-9b90-06e1b92b6e87\") " Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.498246 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities" (OuterVolumeSpecName: "utilities") pod "8e9e14ea-5f94-4828-9b90-06e1b92b6e87" (UID: "8e9e14ea-5f94-4828-9b90-06e1b92b6e87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.503447 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc" (OuterVolumeSpecName: "kube-api-access-p6nkc") pod "8e9e14ea-5f94-4828-9b90-06e1b92b6e87" (UID: "8e9e14ea-5f94-4828-9b90-06e1b92b6e87"). InnerVolumeSpecName "kube-api-access-p6nkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.564762 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9e14ea-5f94-4828-9b90-06e1b92b6e87" (UID: "8e9e14ea-5f94-4828-9b90-06e1b92b6e87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.599871 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.599903 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.599915 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nkc\" (UniqueName: \"kubernetes.io/projected/8e9e14ea-5f94-4828-9b90-06e1b92b6e87-kube-api-access-p6nkc\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.932404 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.945849 4873 generic.go:334] "Generic (PLEG): container finished" podID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerID="b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4" exitCode=0 Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.945894 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" 
event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerDied","Data":"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4"} Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.945922 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfjzm" event={"ID":"8e9e14ea-5f94-4828-9b90-06e1b92b6e87","Type":"ContainerDied","Data":"b5bed3859900a512dacb242a8fff7e5f63835fd00ecc36327832ce7b28da3a82"} Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.945943 4873 scope.go:117] "RemoveContainer" containerID="b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.946124 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfjzm" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.985881 4873 scope.go:117] "RemoveContainer" containerID="3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea" Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.991325 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:47 crc kubenswrapper[4873]: I0219 11:25:47.993726 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.002191 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vfjzm"] Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.013238 4873 scope.go:117] "RemoveContainer" containerID="5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.082774 4873 scope.go:117] "RemoveContainer" containerID="b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4" Feb 19 11:25:48 crc kubenswrapper[4873]: E0219 11:25:48.086659 4873 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4\": container with ID starting with b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4 not found: ID does not exist" containerID="b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.086708 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4"} err="failed to get container status \"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4\": rpc error: code = NotFound desc = could not find container \"b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4\": container with ID starting with b6bc77f409177594424c5b1375a9ee42663caf3262a2565a4e7e5b184266f6e4 not found: ID does not exist" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.086737 4873 scope.go:117] "RemoveContainer" containerID="3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea" Feb 19 11:25:48 crc kubenswrapper[4873]: E0219 11:25:48.087180 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea\": container with ID starting with 3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea not found: ID does not exist" containerID="3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.087203 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea"} err="failed to get container status \"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea\": rpc error: code = NotFound desc = could 
not find container \"3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea\": container with ID starting with 3118c73605c071b588191b2928a7c45823fab027d88c7b1c6466031a472caeea not found: ID does not exist" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.087216 4873 scope.go:117] "RemoveContainer" containerID="5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e" Feb 19 11:25:48 crc kubenswrapper[4873]: E0219 11:25:48.087450 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e\": container with ID starting with 5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e not found: ID does not exist" containerID="5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.087474 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e"} err="failed to get container status \"5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e\": rpc error: code = NotFound desc = could not find container \"5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e\": container with ID starting with 5973cfd0614ad827001f3dd79c532a2ada6c7975a5ffe3c0aec2b940fa38f78e not found: ID does not exist" Feb 19 11:25:48 crc kubenswrapper[4873]: I0219 11:25:48.551648 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.272644 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7t964_4a42b4a3-c207-40a8-80b9-0532a0ec2865/kube-rbac-proxy/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.325815 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-7t964_4a42b4a3-c207-40a8-80b9-0532a0ec2865/controller/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.471575 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.498427 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" path="/var/lib/kubelet/pods/8e9e14ea-5f94-4828-9b90-06e1b92b6e87/volumes" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.689258 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.698229 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.713662 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.720355 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.965142 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vh782" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" containerID="cri-o://5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819" gracePeriod=2 Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.972236 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.972252 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:25:49 crc kubenswrapper[4873]: I0219 11:25:49.980191 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.052192 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.239777 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-reloader/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.241409 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-frr-files/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.258029 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/cp-metrics/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.305042 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/controller/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.481719 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/kube-rbac-proxy/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.499529 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/kube-rbac-proxy-frr/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.508919 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.556789 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/frr-metrics/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.668797 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdqsh\" (UniqueName: \"kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh\") pod \"2f11d216-e951-49a8-9728-9348dd3e09ab\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.669036 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content\") pod \"2f11d216-e951-49a8-9728-9348dd3e09ab\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.669341 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities\") pod \"2f11d216-e951-49a8-9728-9348dd3e09ab\" (UID: \"2f11d216-e951-49a8-9728-9348dd3e09ab\") " Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.670411 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities" (OuterVolumeSpecName: "utilities") pod "2f11d216-e951-49a8-9728-9348dd3e09ab" (UID: "2f11d216-e951-49a8-9728-9348dd3e09ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.684134 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh" (OuterVolumeSpecName: "kube-api-access-wdqsh") pod "2f11d216-e951-49a8-9728-9348dd3e09ab" (UID: "2f11d216-e951-49a8-9728-9348dd3e09ab"). InnerVolumeSpecName "kube-api-access-wdqsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.755290 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/reloader/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.773647 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.773882 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdqsh\" (UniqueName: \"kubernetes.io/projected/2f11d216-e951-49a8-9728-9348dd3e09ab-kube-api-access-wdqsh\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.809777 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f11d216-e951-49a8-9728-9348dd3e09ab" (UID: "2f11d216-e951-49a8-9728-9348dd3e09ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.816438 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-xwr52_8d8f9aee-601f-4530-876b-83709311196b/frr-k8s-webhook-server/0.log" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.875678 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11d216-e951-49a8-9728-9348dd3e09ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.978736 4873 generic.go:334] "Generic (PLEG): container finished" podID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerID="5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819" exitCode=0 Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.978799 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerDied","Data":"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819"} Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.979136 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh782" event={"ID":"2f11d216-e951-49a8-9728-9348dd3e09ab","Type":"ContainerDied","Data":"b05d7372ec51e0442e4648a2dc4cd081b94c1b9c9a02e0ea0e267bcfe4e90abb"} Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.979244 4873 scope.go:117] "RemoveContainer" containerID="5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819" Feb 19 11:25:50 crc kubenswrapper[4873]: I0219 11:25:50.978818 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh782" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.010331 4873 scope.go:117] "RemoveContainer" containerID="4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.037938 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.047915 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vh782"] Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.075997 4873 scope.go:117] "RemoveContainer" containerID="edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.082080 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6897955989-f6tl8_94f344cf-0f09-4812-ab40-dcce7f260a53/manager/0.log" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.098266 4873 scope.go:117] "RemoveContainer" containerID="5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819" Feb 19 11:25:51 crc kubenswrapper[4873]: E0219 11:25:51.099252 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819\": container with ID starting with 5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819 not found: ID does not exist" containerID="5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.099291 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819"} err="failed to get container status 
\"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819\": rpc error: code = NotFound desc = could not find container \"5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819\": container with ID starting with 5f8392fe03ecbd700fd918ae9af7167011d155c907cde8e0623f08621d930819 not found: ID does not exist" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.099316 4873 scope.go:117] "RemoveContainer" containerID="4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55" Feb 19 11:25:51 crc kubenswrapper[4873]: E0219 11:25:51.100563 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55\": container with ID starting with 4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55 not found: ID does not exist" containerID="4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.100597 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55"} err="failed to get container status \"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55\": rpc error: code = NotFound desc = could not find container \"4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55\": container with ID starting with 4b5582085ad38131d740eddcd5c773d88f2d945d5cf67562d6dc6da86ee55b55 not found: ID does not exist" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.100618 4873 scope.go:117] "RemoveContainer" containerID="edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4" Feb 19 11:25:51 crc kubenswrapper[4873]: E0219 11:25:51.103313 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4\": container with ID starting with edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4 not found: ID does not exist" containerID="edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.103360 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4"} err="failed to get container status \"edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4\": rpc error: code = NotFound desc = could not find container \"edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4\": container with ID starting with edb548abbd6f9dd1cc88a622635559640a64f2ec1f91e50fae7650e337dd57f4 not found: ID does not exist" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.315550 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bf7457c95-rq2ph_e9d29e18-f362-478f-911d-ed979e43aae1/webhook-server/0.log" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.412017 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-phsr6_46cac2a1-6c87-4c4e-a73f-92dbee290015/kube-rbac-proxy/0.log" Feb 19 11:25:51 crc kubenswrapper[4873]: I0219 11:25:51.500742 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" path="/var/lib/kubelet/pods/2f11d216-e951-49a8-9728-9348dd3e09ab/volumes" Feb 19 11:25:52 crc kubenswrapper[4873]: I0219 11:25:52.040527 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-phsr6_46cac2a1-6c87-4c4e-a73f-92dbee290015/speaker/0.log" Feb 19 11:25:52 crc kubenswrapper[4873]: I0219 11:25:52.386204 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8fjg_76ea40c9-c4a3-4a32-82a5-d725a73db80d/frr/0.log" 
Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.044693 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.179095 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.254292 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.294511 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.456995 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/util/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.479946 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/extract/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.482975 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088qsgn_0709e82b-60e9-4aed-8e42-e39928e74c21/pull/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.659022 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.835392 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.852510 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 11:26:04 crc kubenswrapper[4873]: I0219 11:26:04.877561 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.032006 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/util/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.041113 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/pull/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.051373 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xwxbf_7a09955d-14f6-4877-bcb4-701d57165495/extract/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.217209 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:26:05 crc 
kubenswrapper[4873]: I0219 11:26:05.424066 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.431278 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.466083 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.610365 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-utilities/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.701324 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/extract-content/0.log" Feb 19 11:26:05 crc kubenswrapper[4873]: I0219 11:26:05.926568 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.067919 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.074262 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.176558 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-bdcwz_d27fce7f-0ae7-4e22-885f-ad2a398647cc/registry-server/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.211259 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.388685 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-utilities/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.408746 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/extract-content/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.655910 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.851166 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:26:06 crc kubenswrapper[4873]: I0219 11:26:06.869336 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.013564 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.150067 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/util/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.238292 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/pull/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.291298 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaplfxv_14a07337-b89d-4574-aa0f-f9a3cdebdd48/extract/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.358320 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zk9wc_5f466b31-21ca-4f19-9b73-72cfb7c68d55/registry-server/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.473663 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jt9rj_1d58439b-31c6-44df-a32d-48f0fcb6a361/marketplace-operator/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.596235 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.745739 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.752092 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.788872 4873 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.958214 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-utilities/0.log" Feb 19 11:26:07 crc kubenswrapper[4873]: I0219 11:26:07.997557 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/extract-content/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.163405 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.220983 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xvshp_f9a9b521-3ed0-40c1-b38f-34c21bd9c242/registry-server/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.360745 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.431611 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.443331 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.564058 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-utilities/0.log" 
Feb 19 11:26:08 crc kubenswrapper[4873]: I0219 11:26:08.587464 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/extract-content/0.log" Feb 19 11:26:09 crc kubenswrapper[4873]: I0219 11:26:09.343937 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prw4c_4cc54252-cfdf-4b71-bfa5-552dcd26500d/registry-server/0.log" Feb 19 11:26:21 crc kubenswrapper[4873]: I0219 11:26:21.066546 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-qptdb_4724c979-0040-4017-86ce-78d2a8bdb44e/prometheus-operator-admission-webhook/0.log" Feb 19 11:26:21 crc kubenswrapper[4873]: I0219 11:26:21.076426 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-v7nww_5d79d4d8-e595-4aec-bc0b-7347b826c257/prometheus-operator/0.log" Feb 19 11:26:21 crc kubenswrapper[4873]: I0219 11:26:21.139066 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7496f9f864-zxpx7_3180318c-7d9a-454b-8de4-887fabae362b/prometheus-operator-admission-webhook/0.log" Feb 19 11:26:21 crc kubenswrapper[4873]: I0219 11:26:21.249478 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7wtlv_b23281d2-935e-47c1-bc83-8d00c7649625/operator/0.log" Feb 19 11:26:21 crc kubenswrapper[4873]: I0219 11:26:21.282653 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8sflg_ea1cc2c7-c932-4b3d-b718-d017eb06163f/perses-operator/0.log" Feb 19 11:27:48 crc kubenswrapper[4873]: I0219 11:27:48.240256 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:27:48 crc kubenswrapper[4873]: I0219 11:27:48.242322 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:28:18 crc kubenswrapper[4873]: I0219 11:28:18.240360 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:28:18 crc kubenswrapper[4873]: I0219 11:28:18.240888 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:28:34 crc kubenswrapper[4873]: I0219 11:28:34.603860 4873 generic.go:334] "Generic (PLEG): container finished" podID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerID="ff3687756e2207c400bf2bbbc9410e3f3ee429ff1507157450e53ef945e6af06" exitCode=0 Feb 19 11:28:34 crc kubenswrapper[4873]: I0219 11:28:34.603965 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" event={"ID":"93cc0682-3903-4dad-a4a1-3e807492bab4","Type":"ContainerDied","Data":"ff3687756e2207c400bf2bbbc9410e3f3ee429ff1507157450e53ef945e6af06"} Feb 19 11:28:34 crc kubenswrapper[4873]: I0219 11:28:34.605043 4873 scope.go:117] "RemoveContainer" 
containerID="ff3687756e2207c400bf2bbbc9410e3f3ee429ff1507157450e53ef945e6af06" Feb 19 11:28:35 crc kubenswrapper[4873]: I0219 11:28:35.634723 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhpxg_must-gather-vhpx8_93cc0682-3903-4dad-a4a1-3e807492bab4/gather/0.log" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.240590 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.241185 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.241235 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.242079 4873 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.242147 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" 
containerID="cri-o://94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3" gracePeriod=600 Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.492160 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhpxg/must-gather-vhpx8"] Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.492946 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="copy" containerID="cri-o://767ace8d23069d52fe292289a53031e59a4a02afb307e1f96188d5747c12d9df" gracePeriod=2 Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.542441 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhpxg/must-gather-vhpx8"] Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.765339 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3" exitCode=0 Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.765722 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3"} Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.765759 4873 scope.go:117] "RemoveContainer" containerID="98422463e0fc17a554205c92a5d79c560a2847f0d82d233735f9770c7f6f4838" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.768978 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhpxg_must-gather-vhpx8_93cc0682-3903-4dad-a4a1-3e807492bab4/copy/0.log" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.769431 4873 generic.go:334] "Generic (PLEG): container finished" podID="93cc0682-3903-4dad-a4a1-3e807492bab4" 
containerID="767ace8d23069d52fe292289a53031e59a4a02afb307e1f96188d5747c12d9df" exitCode=143 Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.932479 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhpxg_must-gather-vhpx8_93cc0682-3903-4dad-a4a1-3e807492bab4/copy/0.log" Feb 19 11:28:48 crc kubenswrapper[4873]: I0219 11:28:48.932991 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.092091 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfvlt\" (UniqueName: \"kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt\") pod \"93cc0682-3903-4dad-a4a1-3e807492bab4\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.092485 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output\") pod \"93cc0682-3903-4dad-a4a1-3e807492bab4\" (UID: \"93cc0682-3903-4dad-a4a1-3e807492bab4\") " Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.098594 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt" (OuterVolumeSpecName: "kube-api-access-nfvlt") pod "93cc0682-3903-4dad-a4a1-3e807492bab4" (UID: "93cc0682-3903-4dad-a4a1-3e807492bab4"). InnerVolumeSpecName "kube-api-access-nfvlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.194987 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfvlt\" (UniqueName: \"kubernetes.io/projected/93cc0682-3903-4dad-a4a1-3e807492bab4-kube-api-access-nfvlt\") on node \"crc\" DevicePath \"\"" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.289730 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "93cc0682-3903-4dad-a4a1-3e807492bab4" (UID: "93cc0682-3903-4dad-a4a1-3e807492bab4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.296838 4873 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/93cc0682-3903-4dad-a4a1-3e807492bab4-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.500979 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" path="/var/lib/kubelet/pods/93cc0682-3903-4dad-a4a1-3e807492bab4/volumes" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.783864 4873 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhpxg_must-gather-vhpx8_93cc0682-3903-4dad-a4a1-3e807492bab4/copy/0.log" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.784867 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhpxg/must-gather-vhpx8" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.784938 4873 scope.go:117] "RemoveContainer" containerID="767ace8d23069d52fe292289a53031e59a4a02afb307e1f96188d5747c12d9df" Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.794077 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerStarted","Data":"c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b"} Feb 19 11:28:49 crc kubenswrapper[4873]: I0219 11:28:49.837318 4873 scope.go:117] "RemoveContainer" containerID="ff3687756e2207c400bf2bbbc9410e3f3ee429ff1507157450e53ef945e6af06" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.161672 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w"] Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162860 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="extract-content" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.162873 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="extract-content" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162892 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="gather" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.162899 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="gather" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162911 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 
11:30:00.162921 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162937 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="extract-utilities" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.162943 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="extract-utilities" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162954 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="extract-content" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.162960 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="extract-content" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162972 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="copy" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.162978 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="copy" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.162994 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="extract-utilities" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163001 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="extract-utilities" Feb 19 11:30:00 crc kubenswrapper[4873]: E0219 11:30:00.163014 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163019 4873 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163292 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="gather" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163306 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f11d216-e951-49a8-9728-9348dd3e09ab" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163318 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cc0682-3903-4dad-a4a1-3e807492bab4" containerName="copy" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.163325 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9e14ea-5f94-4828-9b90-06e1b92b6e87" containerName="registry-server" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.164023 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.167509 4873 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.176636 4873 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.183173 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w"] Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.201707 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txg9x\" (UniqueName: \"kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.201896 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.202137 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.303706 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.303990 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txg9x\" (UniqueName: \"kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.304068 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.304952 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.312284 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.323906 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txg9x\" (UniqueName: \"kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x\") pod \"collect-profiles-29525010-drf9w\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:00 crc kubenswrapper[4873]: I0219 11:30:00.505584 4873 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:01 crc kubenswrapper[4873]: I0219 11:30:01.002685 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w"] Feb 19 11:30:01 crc kubenswrapper[4873]: I0219 11:30:01.781595 4873 generic.go:334] "Generic (PLEG): container finished" podID="3f7d66fd-cac8-4542-8d90-2e1604173795" containerID="49ec155c61c455c5434148fd267e5f053f30dc96bc25a2a99c29dba3e6e8a1b1" exitCode=0 Feb 19 11:30:01 crc kubenswrapper[4873]: I0219 11:30:01.781686 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" event={"ID":"3f7d66fd-cac8-4542-8d90-2e1604173795","Type":"ContainerDied","Data":"49ec155c61c455c5434148fd267e5f053f30dc96bc25a2a99c29dba3e6e8a1b1"} Feb 19 11:30:01 crc kubenswrapper[4873]: I0219 11:30:01.782766 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" 
event={"ID":"3f7d66fd-cac8-4542-8d90-2e1604173795","Type":"ContainerStarted","Data":"e3c7023fa3a3faa12fbb82e63d2c17d79d083ae336289d86303fd9d583194477"} Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.130364 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.277085 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txg9x\" (UniqueName: \"kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x\") pod \"3f7d66fd-cac8-4542-8d90-2e1604173795\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.277397 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume\") pod \"3f7d66fd-cac8-4542-8d90-2e1604173795\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.277489 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume\") pod \"3f7d66fd-cac8-4542-8d90-2e1604173795\" (UID: \"3f7d66fd-cac8-4542-8d90-2e1604173795\") " Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.278089 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f7d66fd-cac8-4542-8d90-2e1604173795" (UID: "3f7d66fd-cac8-4542-8d90-2e1604173795"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.289318 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f7d66fd-cac8-4542-8d90-2e1604173795" (UID: "3f7d66fd-cac8-4542-8d90-2e1604173795"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.292626 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x" (OuterVolumeSpecName: "kube-api-access-txg9x") pod "3f7d66fd-cac8-4542-8d90-2e1604173795" (UID: "3f7d66fd-cac8-4542-8d90-2e1604173795"). InnerVolumeSpecName "kube-api-access-txg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.379536 4873 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7d66fd-cac8-4542-8d90-2e1604173795-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.379571 4873 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7d66fd-cac8-4542-8d90-2e1604173795-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.379581 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txg9x\" (UniqueName: \"kubernetes.io/projected/3f7d66fd-cac8-4542-8d90-2e1604173795-kube-api-access-txg9x\") on node \"crc\" DevicePath \"\"" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.803315 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" 
event={"ID":"3f7d66fd-cac8-4542-8d90-2e1604173795","Type":"ContainerDied","Data":"e3c7023fa3a3faa12fbb82e63d2c17d79d083ae336289d86303fd9d583194477"} Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.803352 4873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c7023fa3a3faa12fbb82e63d2c17d79d083ae336289d86303fd9d583194477" Feb 19 11:30:03 crc kubenswrapper[4873]: I0219 11:30:03.803376 4873 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525010-drf9w" Feb 19 11:30:04 crc kubenswrapper[4873]: I0219 11:30:04.219072 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"] Feb 19 11:30:04 crc kubenswrapper[4873]: I0219 11:30:04.231216 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-7h5c6"] Feb 19 11:30:05 crc kubenswrapper[4873]: I0219 11:30:05.847039 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3672337-92bc-4e97-9c9e-c0a7e7cd284b" path="/var/lib/kubelet/pods/e3672337-92bc-4e97-9c9e-c0a7e7cd284b/volumes" Feb 19 11:30:48 crc kubenswrapper[4873]: I0219 11:30:48.240279 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:30:48 crc kubenswrapper[4873]: I0219 11:30:48.242574 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:31:04 crc 
kubenswrapper[4873]: I0219 11:31:04.487555 4873 scope.go:117] "RemoveContainer" containerID="54b95d1d4eacbeaa7320e5a5833d0056b13e15ee90dc6b62a2553c6f88d2fff8" Feb 19 11:31:18 crc kubenswrapper[4873]: I0219 11:31:18.240339 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:31:18 crc kubenswrapper[4873]: I0219 11:31:18.241259 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:31:48 crc kubenswrapper[4873]: I0219 11:31:48.240475 4873 patch_prober.go:28] interesting pod/machine-config-daemon-qmsl7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:31:48 crc kubenswrapper[4873]: I0219 11:31:48.241165 4873 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:31:48 crc kubenswrapper[4873]: I0219 11:31:48.241225 4873 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" Feb 19 11:31:48 crc kubenswrapper[4873]: I0219 11:31:48.241997 4873 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b"} pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:31:48 crc kubenswrapper[4873]: I0219 11:31:48.242049 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" containerName="machine-config-daemon" containerID="cri-o://c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" gracePeriod=600 Feb 19 11:31:48 crc kubenswrapper[4873]: E0219 11:31:48.361897 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:31:49 crc kubenswrapper[4873]: I0219 11:31:49.164060 4873 generic.go:334] "Generic (PLEG): container finished" podID="8c61760e-2955-4688-b68b-1ceeda73f356" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" exitCode=0 Feb 19 11:31:49 crc kubenswrapper[4873]: I0219 11:31:49.164193 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" event={"ID":"8c61760e-2955-4688-b68b-1ceeda73f356","Type":"ContainerDied","Data":"c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b"} Feb 19 11:31:49 crc kubenswrapper[4873]: I0219 11:31:49.164277 4873 scope.go:117] "RemoveContainer" containerID="94efb2b7f91f96b952ade76f54d8cf096f4da7e422f2f8758aaf2ca9208fbda3" Feb 19 11:31:49 crc 
kubenswrapper[4873]: I0219 11:31:49.164995 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:31:49 crc kubenswrapper[4873]: E0219 11:31:49.165393 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.878065 4873 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:31:50 crc kubenswrapper[4873]: E0219 11:31:50.880004 4873 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7d66fd-cac8-4542-8d90-2e1604173795" containerName="collect-profiles" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.880144 4873 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7d66fd-cac8-4542-8d90-2e1604173795" containerName="collect-profiles" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.880508 4873 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7d66fd-cac8-4542-8d90-2e1604173795" containerName="collect-profiles" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.885408 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.898917 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.971615 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.972084 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sps5g\" (UniqueName: \"kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:50 crc kubenswrapper[4873]: I0219 11:31:50.972312 4873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.074293 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.074995 4873 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.076177 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sps5g\" (UniqueName: \"kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.076552 4873 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.076963 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.095608 4873 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sps5g\" (UniqueName: \"kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g\") pod \"certified-operators-f2mgl\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.214018 4873 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:31:51 crc kubenswrapper[4873]: I0219 11:31:51.770555 4873 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:31:52 crc kubenswrapper[4873]: I0219 11:31:52.196494 4873 generic.go:334] "Generic (PLEG): container finished" podID="22781893-e25a-43dc-b961-51629986957a" containerID="b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3" exitCode=0 Feb 19 11:31:52 crc kubenswrapper[4873]: I0219 11:31:52.196620 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerDied","Data":"b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3"} Feb 19 11:31:52 crc kubenswrapper[4873]: I0219 11:31:52.196877 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerStarted","Data":"adfcfff4241ec7bc41746576adca7849a23b1df91e3ca1dd5f41fd3702cfc668"} Feb 19 11:31:52 crc kubenswrapper[4873]: I0219 11:31:52.201027 4873 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:31:53 crc kubenswrapper[4873]: I0219 11:31:53.210801 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerStarted","Data":"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7"} Feb 19 11:31:55 crc kubenswrapper[4873]: I0219 11:31:55.232678 4873 generic.go:334] "Generic (PLEG): container finished" podID="22781893-e25a-43dc-b961-51629986957a" containerID="21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7" exitCode=0 Feb 19 11:31:55 crc kubenswrapper[4873]: I0219 11:31:55.232767 4873 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerDied","Data":"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7"} Feb 19 11:31:56 crc kubenswrapper[4873]: I0219 11:31:56.245955 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerStarted","Data":"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc"} Feb 19 11:31:56 crc kubenswrapper[4873]: I0219 11:31:56.268731 4873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f2mgl" podStartSLOduration=2.793169823 podStartE2EDuration="6.268710229s" podCreationTimestamp="2026-02-19 11:31:50 +0000 UTC" firstStartedPulling="2026-02-19 11:31:52.200822276 +0000 UTC m=+6421.490253914" lastFinishedPulling="2026-02-19 11:31:55.676362672 +0000 UTC m=+6424.965794320" observedRunningTime="2026-02-19 11:31:56.264673487 +0000 UTC m=+6425.554105145" watchObservedRunningTime="2026-02-19 11:31:56.268710229 +0000 UTC m=+6425.558141867" Feb 19 11:32:01 crc kubenswrapper[4873]: I0219 11:32:01.214815 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:01 crc kubenswrapper[4873]: I0219 11:32:01.215282 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:01 crc kubenswrapper[4873]: I0219 11:32:01.274065 4873 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:01 crc kubenswrapper[4873]: I0219 11:32:01.356535 4873 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 
11:32:03.250703 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.327827 4873 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f2mgl" podUID="22781893-e25a-43dc-b961-51629986957a" containerName="registry-server" containerID="cri-o://08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc" gracePeriod=2 Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.484972 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:32:03 crc kubenswrapper[4873]: E0219 11:32:03.485751 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.798387 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.863008 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content\") pod \"22781893-e25a-43dc-b961-51629986957a\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.863258 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities\") pod \"22781893-e25a-43dc-b961-51629986957a\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.863361 4873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sps5g\" (UniqueName: \"kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g\") pod \"22781893-e25a-43dc-b961-51629986957a\" (UID: \"22781893-e25a-43dc-b961-51629986957a\") " Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.864088 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities" (OuterVolumeSpecName: "utilities") pod "22781893-e25a-43dc-b961-51629986957a" (UID: "22781893-e25a-43dc-b961-51629986957a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.869460 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g" (OuterVolumeSpecName: "kube-api-access-sps5g") pod "22781893-e25a-43dc-b961-51629986957a" (UID: "22781893-e25a-43dc-b961-51629986957a"). InnerVolumeSpecName "kube-api-access-sps5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.965326 4873 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:32:03 crc kubenswrapper[4873]: I0219 11:32:03.965385 4873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sps5g\" (UniqueName: \"kubernetes.io/projected/22781893-e25a-43dc-b961-51629986957a-kube-api-access-sps5g\") on node \"crc\" DevicePath \"\"" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.339997 4873 generic.go:334] "Generic (PLEG): container finished" podID="22781893-e25a-43dc-b961-51629986957a" containerID="08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc" exitCode=0 Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.340040 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerDied","Data":"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc"} Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.340046 4873 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2mgl" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.340071 4873 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2mgl" event={"ID":"22781893-e25a-43dc-b961-51629986957a","Type":"ContainerDied","Data":"adfcfff4241ec7bc41746576adca7849a23b1df91e3ca1dd5f41fd3702cfc668"} Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.340091 4873 scope.go:117] "RemoveContainer" containerID="08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.359280 4873 scope.go:117] "RemoveContainer" containerID="21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.382241 4873 scope.go:117] "RemoveContainer" containerID="b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.426049 4873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22781893-e25a-43dc-b961-51629986957a" (UID: "22781893-e25a-43dc-b961-51629986957a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.431903 4873 scope.go:117] "RemoveContainer" containerID="08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc" Feb 19 11:32:04 crc kubenswrapper[4873]: E0219 11:32:04.432493 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc\": container with ID starting with 08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc not found: ID does not exist" containerID="08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.432542 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc"} err="failed to get container status \"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc\": rpc error: code = NotFound desc = could not find container \"08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc\": container with ID starting with 08f172af911f92e58eb66d1cf87d64ac45aba05754469b3bc18cf51d2d6c67bc not found: ID does not exist" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.432576 4873 scope.go:117] "RemoveContainer" containerID="21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7" Feb 19 11:32:04 crc kubenswrapper[4873]: E0219 11:32:04.433040 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7\": container with ID starting with 21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7 not found: ID does not exist" containerID="21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.433075 
4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7"} err="failed to get container status \"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7\": rpc error: code = NotFound desc = could not find container \"21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7\": container with ID starting with 21ef8368140c82be96b1a925b2aff609721a32bfa56b1096e8f028617eeee3f7 not found: ID does not exist" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.433094 4873 scope.go:117] "RemoveContainer" containerID="b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3" Feb 19 11:32:04 crc kubenswrapper[4873]: E0219 11:32:04.433516 4873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3\": container with ID starting with b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3 not found: ID does not exist" containerID="b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.433549 4873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3"} err="failed to get container status \"b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3\": rpc error: code = NotFound desc = could not find container \"b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3\": container with ID starting with b2eda7221fe7f44826c7503e51dd624ab541a1643b0ad04e32da7864d6f8dab3 not found: ID does not exist" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.478664 4873 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/22781893-e25a-43dc-b961-51629986957a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.674739 4873 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:32:04 crc kubenswrapper[4873]: I0219 11:32:04.684665 4873 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f2mgl"] Feb 19 11:32:05 crc kubenswrapper[4873]: I0219 11:32:05.498949 4873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22781893-e25a-43dc-b961-51629986957a" path="/var/lib/kubelet/pods/22781893-e25a-43dc-b961-51629986957a/volumes" Feb 19 11:32:18 crc kubenswrapper[4873]: I0219 11:32:18.484536 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:32:18 crc kubenswrapper[4873]: E0219 11:32:18.485298 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:32:33 crc kubenswrapper[4873]: I0219 11:32:33.484152 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:32:33 crc kubenswrapper[4873]: E0219 11:32:33.485223 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:32:46 crc kubenswrapper[4873]: I0219 11:32:46.484055 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:32:46 crc kubenswrapper[4873]: E0219 11:32:46.484804 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:32:59 crc kubenswrapper[4873]: I0219 11:32:59.485188 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:32:59 crc kubenswrapper[4873]: E0219 11:32:59.486503 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356" Feb 19 11:33:13 crc kubenswrapper[4873]: I0219 11:33:13.485158 4873 scope.go:117] "RemoveContainer" containerID="c8e5f4205b3e7c0c1ae2128e3fdd0d50b895c85b98150f05f591e71e890b4f5b" Feb 19 11:33:13 crc kubenswrapper[4873]: E0219 11:33:13.486277 4873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qmsl7_openshift-machine-config-operator(8c61760e-2955-4688-b68b-1ceeda73f356)\"" pod="openshift-machine-config-operator/machine-config-daemon-qmsl7" podUID="8c61760e-2955-4688-b68b-1ceeda73f356"